Test Report: Docker_Linux_crio_arm64 22353

dccbb7bb926f2ef30a57d8898bfc971889daa155:2025-12-29:43039

Tests failed (27/332)

TestAddons/serial/Volcano (0.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable volcano --alsologtostderr -v=1: exit status 11 (389.694276ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:11:42.506661  748653 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:11:42.507685  748653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:42.507727  748653 out.go:374] Setting ErrFile to fd 2...
	I1229 07:11:42.507759  748653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:42.508032  748653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:11:42.508394  748653 mustload.go:66] Loading cluster: addons-707536
	I1229 07:11:42.508947  748653 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:11:42.508998  748653 addons.go:622] checking whether the cluster is paused
	I1229 07:11:42.509140  748653 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:11:42.509177  748653 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:11:42.509735  748653 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:11:42.528385  748653 ssh_runner.go:195] Run: systemctl --version
	I1229 07:11:42.528454  748653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:11:42.546282  748653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:11:42.655875  748653 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:11:42.656006  748653 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:11:42.698197  748653 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:11:42.698269  748653 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:11:42.698287  748653 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:11:42.698306  748653 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:11:42.698326  748653 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:11:42.698360  748653 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:11:42.698379  748653 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:11:42.698400  748653 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:11:42.698421  748653 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:11:42.698449  748653 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:11:42.698467  748653 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:11:42.698487  748653 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:11:42.698506  748653 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:11:42.698546  748653 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:11:42.698565  748653 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:11:42.698618  748653 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:11:42.698645  748653 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:11:42.698668  748653 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:11:42.698699  748653 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:11:42.698718  748653 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:11:42.698750  748653 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:11:42.698781  748653 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:11:42.698801  748653 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:11:42.698821  748653 cri.go:96] found id: ""
	I1229 07:11:42.698906  748653 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:11:42.715120  748653 out.go:203] 
	W1229 07:11:42.718048  748653 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:11:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:11:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:11:42.718139  748653 out.go:285] * 
	* 
	W1229 07:11:42.809043  748653 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:11:42.812053  748653 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.39s)
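Note on the failure above (and on the identical "addons disable" failures below): before disabling an addon, minikube checks whether the cluster is paused (addons.go:622), and on this CRI-O profile that check ends with "sudo runc list -f json", which exits 1 because /run/runc does not exist on the node, so every disable command aborts with MK_ADDON_DISABLE_PAUSED. The following is only a sketch of how the failing check could be re-run by hand; it assumes the addons-707536 profile from this run is still up and reuses commands that already appear in the log:

	# confirm the node container is running (same inspect the check performs)
	docker container inspect addons-707536 --format={{.State.Status}}
	# list kube-system containers via crictl, as the cri.go check does
	out/minikube-linux-arm64 -p addons-707536 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the step that actually fails here: runc has no state directory at /run/runc on this node
	out/minikube-linux-arm64 -p addons-707536 ssh "sudo runc list -f json"

If the last command reproduces the "open /run/runc: no such file or directory" error outside the tests, that would point at the paused-state check itself rather than at the individual addons.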

TestAddons/parallel/Registry (17.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 12.40891ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-ts9v4" [7bccdd84-e8c8-430e-bfab-86209f56007d] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003591527s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-pshgp" [024cd990-58df-496e-9f8c-4d81503eb24e] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.009550632s
addons_test.go:394: (dbg) Run:  kubectl --context addons-707536 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-707536 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-707536 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.66462997s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 ip
2025/12/29 07:12:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable registry --alsologtostderr -v=1: exit status 11 (268.139249ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:09.028949  749160 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:09.029728  749160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:09.029746  749160 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:09.029753  749160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:09.030033  749160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:09.030372  749160 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:09.032170  749160 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:09.032227  749160 addons.go:622] checking whether the cluster is paused
	I1229 07:12:09.032393  749160 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:09.032426  749160 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:09.033060  749160 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:09.050564  749160 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:09.050633  749160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:09.076180  749160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:09.183424  749160 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:09.183504  749160 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:09.214004  749160 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:09.214075  749160 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:09.214094  749160 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:09.214114  749160 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:09.214140  749160 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:09.214166  749160 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:09.214184  749160 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:09.214202  749160 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:09.214225  749160 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:09.214254  749160 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:09.214277  749160 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:09.214299  749160 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:09.214322  749160 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:09.214361  749160 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:09.214382  749160 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:09.214413  749160 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:09.214433  749160 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:09.214454  749160 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:09.214474  749160 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:09.214494  749160 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:09.214518  749160 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:09.214548  749160 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:09.214566  749160 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:09.214590  749160 cri.go:96] found id: ""
	I1229 07:12:09.214673  749160 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:09.230258  749160 out.go:203] 
	W1229 07:12:09.233155  749160 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:09.233184  749160 out.go:285] * 
	* 
	W1229 07:12:09.236385  749160 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:09.239399  749160 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (17.19s)

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.61169ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-707536
addons_test.go:334: (dbg) Run:  kubectl --context addons-707536 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (255.674653ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:49.205539  751093 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:49.206364  751093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:49.206410  751093 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:49.206433  751093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:49.206876  751093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:49.207315  751093 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:49.208034  751093 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:49.208086  751093 addons.go:622] checking whether the cluster is paused
	I1229 07:12:49.208263  751093 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:49.208295  751093 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:49.209323  751093 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:49.227220  751093 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:49.227298  751093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:49.246345  751093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:49.351645  751093 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:49.351735  751093 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:49.382363  751093 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:49.382388  751093 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:49.382393  751093 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:49.382397  751093 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:49.382401  751093 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:49.382404  751093 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:49.382407  751093 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:49.382428  751093 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:49.382438  751093 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:49.382445  751093 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:49.382448  751093 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:49.382452  751093 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:49.382455  751093 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:49.382458  751093 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:49.382462  751093 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:49.382473  751093 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:49.382477  751093 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:49.382481  751093 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:49.382484  751093 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:49.382487  751093 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:49.382505  751093 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:49.382525  751093 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:49.382529  751093 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:49.382545  751093 cri.go:96] found id: ""
	I1229 07:12:49.382603  751093 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:49.399063  751093 out.go:203] 
	W1229 07:12:49.402023  751093 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:49.402053  751093 out.go:285] * 
	* 
	W1229 07:12:49.405650  751093 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:49.408637  751093 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (10.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-707536 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-707536 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-707536 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [788a9e4d-83f3-47e5-b1a6-039bf824fea0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [788a9e4d-83f3-47e5-b1a6-039bf824fea0] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003601873s
I1229 07:12:54.967222  741854 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-707536 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (274.828229ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:56.027196  751353 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:56.027892  751353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:56.027907  751353 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:56.027913  751353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:56.028211  751353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:56.028524  751353 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:56.028984  751353 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:56.029010  751353 addons.go:622] checking whether the cluster is paused
	I1229 07:12:56.029133  751353 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:56.029150  751353 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:56.029712  751353 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:56.047201  751353 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:56.047256  751353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:56.072495  751353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:56.180228  751353 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:56.180328  751353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:56.218748  751353 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:56.218819  751353 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:56.218841  751353 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:56.218863  751353 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:56.218899  751353 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:56.218923  751353 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:56.218944  751353 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:56.219003  751353 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:56.219026  751353 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:56.219066  751353 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:56.219088  751353 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:56.219109  751353 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:56.219144  751353 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:56.219167  751353 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:56.219189  751353 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:56.219252  751353 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:56.219277  751353 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:56.219300  751353 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:56.219331  751353 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:56.219353  751353 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:56.219375  751353 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:56.219406  751353 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:56.219427  751353 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:56.219444  751353 cri.go:96] found id: ""
	I1229 07:12:56.219527  751353 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:56.236523  751353 out.go:203] 
	W1229 07:12:56.239590  751353 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:56.239626  751353 out.go:285] * 
	* 
	W1229 07:12:56.242884  751353 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:56.246023  751353 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable ingress --alsologtostderr -v=1: exit status 11 (288.871788ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:56.315700  751413 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:56.316407  751413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:56.316419  751413 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:56.316458  751413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:56.316726  751413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:56.317117  751413 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:56.323391  751413 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:56.323430  751413 addons.go:622] checking whether the cluster is paused
	I1229 07:12:56.323571  751413 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:56.323583  751413 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:56.324308  751413 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:56.347187  751413 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:56.347250  751413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:56.368434  751413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:56.476616  751413 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:56.476696  751413 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:56.508087  751413 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:56.508110  751413 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:56.508115  751413 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:56.508119  751413 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:56.508122  751413 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:56.508125  751413 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:56.508129  751413 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:56.508132  751413 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:56.508141  751413 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:56.508147  751413 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:56.508151  751413 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:56.508154  751413 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:56.508157  751413 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:56.508161  751413 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:56.508164  751413 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:56.508170  751413 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:56.508173  751413 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:56.508176  751413 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:56.508179  751413 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:56.508182  751413 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:56.508187  751413 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:56.508189  751413 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:56.508192  751413 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:56.508195  751413 cri.go:96] found id: ""
	I1229 07:12:56.508245  751413 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:56.526214  751413 out.go:203] 
	W1229 07:12:56.529450  751413 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:56.529470  751413 out.go:285] * 
	* 
	W1229 07:12:56.532762  751413 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:56.535680  751413 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (10.20s)

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-pp66d" [e220e979-ce5a-4ec6-88af-5ebd9471956d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003911803s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (268.276237ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:39.756198  750597 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:39.756873  750597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:39.756891  750597 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:39.756897  750597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:39.757211  750597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:39.757532  750597 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:39.757956  750597 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:39.757983  750597 addons.go:622] checking whether the cluster is paused
	I1229 07:12:39.758162  750597 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:39.758183  750597 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:39.758756  750597 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:39.784024  750597 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:39.784084  750597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:39.803146  750597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:39.908024  750597 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:39.908114  750597 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:39.940934  750597 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:39.940970  750597 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:39.940977  750597 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:39.940980  750597 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:39.940983  750597 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:39.940988  750597 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:39.940991  750597 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:39.940994  750597 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:39.940998  750597 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:39.941006  750597 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:39.941013  750597 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:39.941016  750597 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:39.941019  750597 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:39.941023  750597 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:39.941026  750597 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:39.941039  750597 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:39.941042  750597 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:39.941047  750597 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:39.941053  750597 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:39.941056  750597 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:39.941061  750597 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:39.941072  750597 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:39.941076  750597 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:39.941079  750597 cri.go:96] found id: ""
	I1229 07:12:39.941136  750597 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:39.956933  750597 out.go:203] 
	W1229 07:12:39.959927  750597 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:39.959964  750597 out.go:285] * 
	* 
	W1229 07:12:39.963355  750597 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:39.966339  750597 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (6.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.211832ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-gv7xc" [96e7d547-d50c-4767-8c1c-482d040cb491] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003018893s
addons_test.go:465: (dbg) Run:  kubectl --context addons-707536 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (271.058377ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:46.119300  750730 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:46.120156  750730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:46.120219  750730 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:46.120242  750730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:46.120754  750730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:46.121400  750730 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:46.123325  750730 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:46.123368  750730 addons.go:622] checking whether the cluster is paused
	I1229 07:12:46.123525  750730 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:46.123544  750730 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:46.124072  750730 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:46.143598  750730 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:46.143656  750730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:46.167995  750730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:46.275786  750730 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:46.275884  750730 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:46.312378  750730 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:46.312405  750730 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:46.312411  750730 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:46.312415  750730 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:46.312418  750730 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:46.312422  750730 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:46.312425  750730 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:46.312428  750730 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:46.312430  750730 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:46.312444  750730 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:46.312456  750730 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:46.312461  750730 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:46.312464  750730 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:46.312467  750730 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:46.312470  750730 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:46.312475  750730 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:46.312478  750730 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:46.312483  750730 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:46.312492  750730 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:46.312501  750730 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:46.312506  750730 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:46.312509  750730 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:46.312513  750730 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:46.312516  750730 cri.go:96] found id: ""
	I1229 07:12:46.312579  750730 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:46.329032  750730 out.go:203] 
	W1229 07:12:46.333025  750730 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:46.333055  750730 out.go:285] * 
	* 
	W1229 07:12:46.336288  750730 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:46.339213  750730 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.37s)

TestAddons/parallel/CSI (50.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1229 07:11:58.328933  741854 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1229 07:11:58.332414  741854 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1229 07:11:58.332442  741854 kapi.go:107] duration metric: took 3.534139ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.544199ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [21614047-7518-4bd4-b89e-f857f65461b3] Pending
helpers_test.go:353: "task-pv-pod" [21614047-7518-4bd4-b89e-f857f65461b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [21614047-7518-4bd4-b89e-f857f65461b3] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003238934s
addons_test.go:574: (dbg) Run:  kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-707536 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-707536 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-707536 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-707536 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [8c7e1d7e-a544-4a98-9e19-ca0d20c4ba4b] Pending
helpers_test.go:353: "task-pv-pod-restore" [8c7e1d7e-a544-4a98-9e19-ca0d20c4ba4b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [8c7e1d7e-a544-4a98-9e19-ca0d20c4ba4b] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00423237s
addons_test.go:616: (dbg) Run:  kubectl --context addons-707536 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-707536 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-707536 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (270.420398ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:48.416361  750982 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:48.417254  750982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:48.417297  750982 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:48.417318  750982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:48.417633  750982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:48.418019  750982 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:48.418585  750982 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:48.418640  750982 addons.go:622] checking whether the cluster is paused
	I1229 07:12:48.418815  750982 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:48.418850  750982 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:48.419480  750982 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:48.437760  750982 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:48.437823  750982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:48.458585  750982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:48.563806  750982 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:48.563891  750982 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:48.606364  750982 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:48.606389  750982 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:48.606394  750982 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:48.606398  750982 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:48.606402  750982 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:48.606405  750982 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:48.606409  750982 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:48.606412  750982 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:48.606415  750982 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:48.606421  750982 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:48.606425  750982 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:48.606428  750982 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:48.606431  750982 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:48.606434  750982 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:48.606437  750982 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:48.606443  750982 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:48.606447  750982 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:48.606451  750982 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:48.606454  750982 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:48.606457  750982 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:48.606462  750982 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:48.606465  750982 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:48.606468  750982 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:48.606484  750982 cri.go:96] found id: ""
	I1229 07:12:48.606549  750982 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:48.624585  750982 out.go:203] 
	W1229 07:12:48.627647  750982 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:48.627713  750982 out.go:285] * 
	* 
	W1229 07:12:48.631332  750982 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:48.634828  750982 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (284.230635ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:48.709572  751038 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:48.710299  751038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:48.710315  751038 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:48.710321  751038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:48.710749  751038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:48.711169  751038 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:48.711836  751038 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:48.711864  751038 addons.go:622] checking whether the cluster is paused
	I1229 07:12:48.711998  751038 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:48.712017  751038 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:48.713033  751038 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:48.730120  751038 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:48.730198  751038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:48.749370  751038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:48.855838  751038 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:48.855926  751038 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:48.886370  751038 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:48.886393  751038 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:48.886398  751038 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:48.886403  751038 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:48.886406  751038 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:48.886410  751038 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:48.886413  751038 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:48.886417  751038 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:48.886420  751038 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:48.886427  751038 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:48.886430  751038 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:48.886433  751038 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:48.886437  751038 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:48.886440  751038 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:48.886443  751038 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:48.886450  751038 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:48.886453  751038 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:48.886457  751038 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:48.886460  751038 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:48.886463  751038 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:48.886468  751038 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:48.886475  751038 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:48.886484  751038 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:48.886490  751038 cri.go:96] found id: ""
	I1229 07:12:48.886556  751038 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:48.909817  751038 out.go:203] 
	W1229 07:12:48.912813  751038 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:48.912841  751038 out.go:285] * 
	* 
	W1229 07:12:48.916131  751038 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:48.919210  751038 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (50.60s)
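
For reference, the Kubernetes-side steps of this test all passed; only the trailing volumesnapshots and csi-hostpath-driver disable probes failed, with the same runc error noted earlier. A condensed replay of the sequence the test drove, using the same manifests from the repository's testdata/csi-hostpath-driver directory against the addons-707536 context (file paths and object names exactly as they appear in the log; adjust to your checkout), is:

	kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-707536 get pvc hpvc -o jsonpath={.status.phase} -n default    # phase polled by the test while waiting
	kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-707536 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
	kubectl --context addons-707536 delete pod task-pv-pod
	kubectl --context addons-707536 delete pvc hpvc
	kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-707536 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-707536 delete pod task-pv-pod-restore
	kubectl --context addons-707536 delete pvc hpvc-restore
	kubectl --context addons-707536 delete volumesnapshot new-snapshot-demo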

TestAddons/parallel/Headlamp (3.2s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-707536 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-707536 --alsologtostderr -v=1: exit status 11 (274.380469ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1229 07:12:30.549954  750017 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:30.550826  750017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:30.550883  750017 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:30.550907  750017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:30.551314  750017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:30.552426  750017 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:30.553067  750017 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:30.553098  750017 addons.go:622] checking whether the cluster is paused
	I1229 07:12:30.553318  750017 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:30.553341  750017 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:30.553919  750017 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:30.571252  750017 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:30.571318  750017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:30.588950  750017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:30.699410  750017 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:30.699496  750017 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:30.742959  750017 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:30.742982  750017 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:30.742988  750017 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:30.742992  750017 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:30.742995  750017 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:30.742999  750017 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:30.743002  750017 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:30.743005  750017 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:30.743008  750017 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:30.743016  750017 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:30.743020  750017 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:30.743023  750017 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:30.743026  750017 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:30.743030  750017 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:30.743033  750017 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:30.743039  750017 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:30.743042  750017 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:30.743046  750017 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:30.743049  750017 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:30.743052  750017 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:30.743057  750017 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:30.743060  750017 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:30.743063  750017 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:30.743066  750017 cri.go:96] found id: ""
	I1229 07:12:30.743117  750017 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:30.757563  750017 out.go:203] 
	W1229 07:12:30.760548  750017 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:30.760576  750017 out.go:285] * 
	* 
	W1229 07:12:30.764166  750017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:30.767268  750017 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-707536 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-707536
helpers_test.go:244: (dbg) docker inspect addons-707536:

-- stdout --
	[
	    {
	        "Id": "d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa",
	        "Created": "2025-12-29T07:10:00.275125253Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:10:00.404450397Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa/hosts",
	        "LogPath": "/var/lib/docker/containers/d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa/d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa-json.log",
	        "Name": "/addons-707536",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-707536:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-707536",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d02f737bfd308fbb2d9ba4b48b06afb3e29f1714255f1e5f552431ef6f549ffa",
	                "LowerDir": "/var/lib/docker/overlay2/05013fa7d09a6f3460c5e5bcf007e7fda15004d2a8ea1b3c6f9502b0c0b2372f-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05013fa7d09a6f3460c5e5bcf007e7fda15004d2a8ea1b3c6f9502b0c0b2372f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05013fa7d09a6f3460c5e5bcf007e7fda15004d2a8ea1b3c6f9502b0c0b2372f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05013fa7d09a6f3460c5e5bcf007e7fda15004d2a8ea1b3c6f9502b0c0b2372f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-707536",
	                "Source": "/var/lib/docker/volumes/addons-707536/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-707536",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-707536",
	                "name.minikube.sigs.k8s.io": "addons-707536",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0eb29b9167102de0ca096a3120fac434b155226b50b067fb0b7a00ab860ec18e",
	            "SandboxKey": "/var/run/docker/netns/0eb29b916710",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-707536": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:ec:52:d8:b8:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2eccb412bd8c94cf2d5d218406d8db7f255ded8d497cd1026a9da94c70de9736",
	                    "EndpointID": "bde13907ca30588b653f0412608b4d2e76ca5c2af3b21db7e134cb6eb2e3c2c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-707536",
	                        "d02f737bfd30"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
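
The 127.0.0.1:33533 SSH endpoint used by the failing CLI calls earlier in this report corresponds to the 22/tcp mapping in the inspect output above. The same Go template that minikube logs can be run by hand (shell quoting adjusted) to read it:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-707536
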
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-707536 -n addons-707536
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-707536 logs -n 25: (1.537164883s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-861890 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-861890   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ delete  │ -p download-only-861890                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-861890   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ -o=json --download-only -p download-only-544678 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-544678   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ delete  │ -p download-only-544678                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-544678   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ delete  │ -p download-only-861890                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-861890   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ delete  │ -p download-only-544678                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-544678   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ --download-only -p download-docker-846173 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-846173 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ -p download-docker-846173                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-846173 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ --download-only -p binary-mirror-084929 --alsologtostderr --binary-mirror http://127.0.0.1:44089 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-084929   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ -p binary-mirror-084929                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-084929   │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ addons  │ enable dashboard -p addons-707536                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ addons  │ disable dashboard -p addons-707536                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ start   │ -p addons-707536 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:11 UTC │
	│ addons  │ addons-707536 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │                     │
	│ addons  │ addons-707536 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │                     │
	│ addons  │ addons-707536 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:11 UTC │                     │
	│ ip      │ addons-707536 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ addons  │ addons-707536 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ addons  │ addons-707536 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ ssh     │ addons-707536 ssh cat /opt/local-path-provisioner/pvc-9b09faec-d460-4016-a891-cede84331133_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │ 29 Dec 25 07:12 UTC │
	│ addons  │ addons-707536 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ addons  │ addons-707536 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-707536 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-707536          │ jenkins │ v1.37.0 │ 29 Dec 25 07:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
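The audit table above records every minikube invocation made during the run. For anyone re-examining this profile locally, a minimal sketch of pulling its state and full log dump with the same workspace binary (the profile name and binary path are taken from the rows above; `logs --file` simply redirects the dump to a file):

	# Check the status of the profile created by the long `start` invocation above.
	out/minikube-linux-arm64 -p addons-707536 status

	# Capture the full log dump for offline inspection.
	out/minikube-linux-arm64 -p addons-707536 logs --file=addons-707536.log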
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:35.067838  742618 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:35.067947  742618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:35.067958  742618 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:35.067967  742618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:35.068332  742618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:09:35.068892  742618 out.go:368] Setting JSON to false
	I1229 07:09:35.069704  742618 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13927,"bootTime":1766978248,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:09:35.069789  742618 start.go:143] virtualization:  
	I1229 07:09:35.071366  742618 out.go:179] * [addons-707536] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:09:35.072973  742618 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:09:35.073051  742618 notify.go:221] Checking for updates...
	I1229 07:09:35.074879  742618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:35.076246  742618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:09:35.077648  742618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:09:35.078705  742618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:09:35.080000  742618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:09:35.081540  742618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:35.103212  742618 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:09:35.103345  742618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:09:35.170858  742618 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-29 07:09:35.161853852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:09:35.170970  742618 docker.go:319] overlay module found
	I1229 07:09:35.172553  742618 out.go:179] * Using the docker driver based on user configuration
	I1229 07:09:35.173839  742618 start.go:309] selected driver: docker
	I1229 07:09:35.173862  742618 start.go:928] validating driver "docker" against <nil>
	I1229 07:09:35.173876  742618 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:09:35.174566  742618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:09:35.227659  742618 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-29 07:09:35.218589227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:09:35.227812  742618 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:09:35.228058  742618 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:09:35.229234  742618 out.go:179] * Using Docker driver with root privileges
	I1229 07:09:35.230353  742618 cni.go:84] Creating CNI manager for ""
	I1229 07:09:35.230423  742618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:09:35.230437  742618 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:09:35.230509  742618 start.go:353] cluster config:
	{Name:addons-707536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-707536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:35.231909  742618 out.go:179] * Starting "addons-707536" primary control-plane node in "addons-707536" cluster
	I1229 07:09:35.233000  742618 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:09:35.234146  742618 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:09:35.235289  742618 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:09:35.235352  742618 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:09:35.235357  742618 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:09:35.235369  742618 cache.go:65] Caching tarball of preloaded images
	I1229 07:09:35.235454  742618 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:09:35.235465  742618 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:09:35.235820  742618 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/config.json ...
	I1229 07:09:35.235842  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/config.json: {Name:mkb761e318ecab0aabae41f0f541e9a3d23ed743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:09:35.251380  742618 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 07:09:35.251515  742618 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory
	I1229 07:09:35.251538  742618 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory, skipping pull
	I1229 07:09:35.251542  742618 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in cache, skipping pull
	I1229 07:09:35.251550  742618 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 as a tarball
	I1229 07:09:35.251557  742618 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 from local cache
	I1229 07:09:53.624809  742618 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 from cached tarball
	I1229 07:09:53.624850  742618 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:09:53.624891  742618 start.go:360] acquireMachinesLock for addons-707536: {Name:mka550db4f5fafe212832634cbceeebc4e454273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:09:53.625613  742618 start.go:364] duration metric: took 685.406µs to acquireMachinesLock for "addons-707536"
	I1229 07:09:53.625654  742618 start.go:93] Provisioning new machine with config: &{Name:addons-707536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-707536 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:09:53.625728  742618 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:09:53.629226  742618 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1229 07:09:53.629475  742618 start.go:159] libmachine.API.Create for "addons-707536" (driver="docker")
	I1229 07:09:53.629514  742618 client.go:173] LocalClient.Create starting
	I1229 07:09:53.629620  742618 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:09:53.842245  742618 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:09:54.004053  742618 cli_runner.go:164] Run: docker network inspect addons-707536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:09:54.020878  742618 cli_runner.go:211] docker network inspect addons-707536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:09:54.020979  742618 network_create.go:284] running [docker network inspect addons-707536] to gather additional debugging logs...
	I1229 07:09:54.021001  742618 cli_runner.go:164] Run: docker network inspect addons-707536
	W1229 07:09:54.036852  742618 cli_runner.go:211] docker network inspect addons-707536 returned with exit code 1
	I1229 07:09:54.036887  742618 network_create.go:287] error running [docker network inspect addons-707536]: docker network inspect addons-707536: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-707536 not found
	I1229 07:09:54.036901  742618 network_create.go:289] output of [docker network inspect addons-707536]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-707536 not found
	
	** /stderr **
	I1229 07:09:54.037012  742618 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:09:54.053164  742618 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad8f20}
	I1229 07:09:54.053205  742618 network_create.go:124] attempt to create docker network addons-707536 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1229 07:09:54.053269  742618 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-707536 addons-707536
	I1229 07:09:54.113105  742618 network_create.go:108] docker network addons-707536 192.168.49.0/24 created
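The network probe-and-create sequence above can be replayed by hand with the plain Docker CLI. A minimal sketch using the name, subnet, gateway and MTU reported in the log (minikube also attaches its own labels and bridge options, so this is illustrative rather than byte-for-byte identical):

	# Create a bridge network equivalent to the one the log reports creating.
	docker network create --driver=bridge --subnet=192.168.49.0/24 \
	  --gateway=192.168.49.1 -o com.docker.network.driver.mtu=1500 addons-707536

	# Confirm the subnet and gateway that were actually assigned.
	docker network inspect addons-707536 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'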
	I1229 07:09:54.113141  742618 kic.go:121] calculated static IP "192.168.49.2" for the "addons-707536" container
	I1229 07:09:54.113218  742618 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:09:54.131712  742618 cli_runner.go:164] Run: docker volume create addons-707536 --label name.minikube.sigs.k8s.io=addons-707536 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:09:54.149869  742618 oci.go:103] Successfully created a docker volume addons-707536
	I1229 07:09:54.149955  742618 cli_runner.go:164] Run: docker run --rm --name addons-707536-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-707536 --entrypoint /usr/bin/test -v addons-707536:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:09:56.205909  742618 cli_runner.go:217] Completed: docker run --rm --name addons-707536-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-707536 --entrypoint /usr/bin/test -v addons-707536:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib: (2.05589891s)
	I1229 07:09:56.205945  742618 oci.go:107] Successfully prepared a docker volume addons-707536
	I1229 07:09:56.205995  742618 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:09:56.206009  742618 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:09:56.206076  742618 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-707536:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:10:00.077199  742618 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-707536:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.871055067s)
	I1229 07:10:00.077236  742618 kic.go:203] duration metric: took 3.871223905s to extract preloaded images to volume ...
	W1229 07:10:00.077407  742618 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:10:00.077535  742618 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:10:00.239397  742618 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-707536 --name addons-707536 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-707536 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-707536 --network addons-707536 --ip 192.168.49.2 --volume addons-707536:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:10:00.653898  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Running}}
	I1229 07:10:00.681545  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:00.710334  742618 cli_runner.go:164] Run: docker exec addons-707536 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:10:00.769221  742618 oci.go:144] the created container "addons-707536" has a running status.
	I1229 07:10:00.769250  742618 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa...
	I1229 07:10:00.892028  742618 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:10:00.912248  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:00.935663  742618 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:10:00.935683  742618 kic_runner.go:114] Args: [docker exec --privileged addons-707536 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:10:00.989419  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:01.013220  742618 machine.go:94] provisionDockerMachine start ...
	I1229 07:10:01.013309  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:01.042637  742618 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:01.042971  742618 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1229 07:10:01.042981  742618 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:10:01.043756  742618 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58176->127.0.0.1:33533: read: connection reset by peer
	I1229 07:10:04.200546  742618 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-707536
	
	I1229 07:10:04.200612  742618 ubuntu.go:182] provisioning hostname "addons-707536"
	I1229 07:10:04.200703  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:04.218252  742618 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:04.218564  742618 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1229 07:10:04.218591  742618 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-707536 && echo "addons-707536" | sudo tee /etc/hostname
	I1229 07:10:04.378801  742618 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-707536
	
	I1229 07:10:04.378881  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:04.398104  742618 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:04.398492  742618 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1229 07:10:04.398513  742618 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-707536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-707536/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-707536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:10:04.549560  742618 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:10:04.549587  742618 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:10:04.549608  742618 ubuntu.go:190] setting up certificates
	I1229 07:10:04.549624  742618 provision.go:84] configureAuth start
	I1229 07:10:04.549715  742618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-707536
	I1229 07:10:04.567213  742618 provision.go:143] copyHostCerts
	I1229 07:10:04.567316  742618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:10:04.567496  742618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:10:04.567585  742618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:10:04.567668  742618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.addons-707536 san=[127.0.0.1 192.168.49.2 addons-707536 localhost minikube]
	I1229 07:10:04.759559  742618 provision.go:177] copyRemoteCerts
	I1229 07:10:04.759637  742618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:10:04.759677  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:04.777813  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:04.885052  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:10:04.902824  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1229 07:10:04.920824  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:10:04.938541  742618 provision.go:87] duration metric: took 388.889559ms to configureAuth
	I1229 07:10:04.938572  742618 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:10:04.938805  742618 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:04.938919  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:04.955443  742618 main.go:144] libmachine: Using SSH client type: native
	I1229 07:10:04.955766  742618 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1229 07:10:04.955780  742618 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:10:05.260842  742618 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:10:05.260869  742618 machine.go:97] duration metric: took 4.247628151s to provisionDockerMachine
	I1229 07:10:05.260880  742618 client.go:176] duration metric: took 11.631359681s to LocalClient.Create
	I1229 07:10:05.260894  742618 start.go:167] duration metric: took 11.631420515s to libmachine.API.Create "addons-707536"
	I1229 07:10:05.260921  742618 start.go:293] postStartSetup for "addons-707536" (driver="docker")
	I1229 07:10:05.260947  742618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:10:05.261022  742618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:10:05.261067  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:05.280138  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:05.385285  742618 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:10:05.388778  742618 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:10:05.388836  742618 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:10:05.388848  742618 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:10:05.388918  742618 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:10:05.388952  742618 start.go:296] duration metric: took 128.019835ms for postStartSetup
	I1229 07:10:05.389283  742618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-707536
	I1229 07:10:05.406020  742618 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/config.json ...
	I1229 07:10:05.406320  742618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:10:05.406372  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:05.423135  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:05.525815  742618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:10:05.530470  742618 start.go:128] duration metric: took 11.904727538s to createHost
	I1229 07:10:05.530497  742618 start.go:83] releasing machines lock for "addons-707536", held for 11.904863842s
	I1229 07:10:05.530566  742618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-707536
	I1229 07:10:05.547178  742618 ssh_runner.go:195] Run: cat /version.json
	I1229 07:10:05.547243  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:05.547514  742618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:10:05.547574  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:05.567577  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:05.578343  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:05.672635  742618 ssh_runner.go:195] Run: systemctl --version
	I1229 07:10:05.763931  742618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:10:05.798930  742618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:10:05.803387  742618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:10:05.803459  742618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:10:05.834197  742618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:10:05.834280  742618 start.go:496] detecting cgroup driver to use...
	I1229 07:10:05.834346  742618 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:10:05.834447  742618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:10:05.853926  742618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:10:05.867916  742618 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:10:05.868017  742618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:10:05.887232  742618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:10:05.906206  742618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:10:06.019244  742618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:10:06.146831  742618 docker.go:234] disabling docker service ...
	I1229 07:10:06.146903  742618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:10:06.170663  742618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:10:06.186307  742618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:10:06.307332  742618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:10:06.426715  742618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:10:06.440375  742618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:10:06.454253  742618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:10:06.454337  742618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.463184  742618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:10:06.463300  742618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.472519  742618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.481282  742618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.490247  742618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:10:06.498858  742618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.508181  742618 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.522033  742618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:10:06.531187  742618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:10:06.539305  742618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:10:06.547683  742618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:06.664642  742618 ssh_runner.go:195] Run: sudo systemctl restart crio
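Taken together, the sed invocations above amount to a small set of edits to /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A consolidated sketch of the two central ones (pause image and cgroup manager), assuming the same file layout as on the node:

	# Collapse the per-key sed edits shown above into a single pass, then restart CRI-O.
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio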
	I1229 07:10:06.835520  742618 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:10:06.835603  742618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:10:06.839525  742618 start.go:574] Will wait 60s for crictl version
	I1229 07:10:06.839598  742618 ssh_runner.go:195] Run: which crictl
	I1229 07:10:06.843455  742618 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:10:06.870621  742618 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:10:06.870805  742618 ssh_runner.go:195] Run: crio --version
	I1229 07:10:06.899877  742618 ssh_runner.go:195] Run: crio --version
	I1229 07:10:06.935388  742618 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:10:06.938303  742618 cli_runner.go:164] Run: docker network inspect addons-707536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:10:06.954313  742618 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1229 07:10:06.958136  742618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:10:06.967919  742618 kubeadm.go:884] updating cluster {Name:addons-707536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-707536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:10:06.968047  742618 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:10:06.968102  742618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:10:07.005130  742618 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:10:07.005156  742618 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:10:07.005219  742618 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:10:07.035378  742618 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:10:07.035403  742618 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:10:07.035411  742618 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1229 07:10:07.035512  742618 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-707536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-707536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
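	The unit fragment above is what minikube renders into the kubelet drop-in; a minimal way to double-check it on the node, assuming the addons-707536 profile is still up, is to dump the rendered unit over SSH (a sketch, not part of the test output flow):
	  # Show the kubelet unit plus the 10-kubeadm.conf drop-in generated above (sketch)
	  out/minikube-linux-arm64 -p addons-707536 ssh -- systemctl cat kubelet
	  out/minikube-linux-arm64 -p addons-707536 ssh -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf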
	I1229 07:10:07.035601  742618 ssh_runner.go:195] Run: crio config
	I1229 07:10:07.098864  742618 cni.go:84] Creating CNI manager for ""
	I1229 07:10:07.098888  742618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:10:07.098909  742618 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:10:07.098957  742618 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-707536 NodeName:addons-707536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:10:07.099138  742618 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-707536"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:10:07.099232  742618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:10:07.107413  742618 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:10:07.107543  742618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:10:07.115140  742618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1229 07:10:07.127870  742618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:10:07.141267  742618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
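	The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml; a minimal sketch for checking it by hand, assuming a kubeadm new enough (v1.26+) to have the validate subcommand:
	  # Validate the staged config against the v1beta4 schema (sketch)
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # Or walk the full init flow without touching the real manifests
	  sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new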
	I1229 07:10:07.154057  742618 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:10:07.157683  742618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:10:07.167422  742618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:07.282962  742618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:10:07.298599  742618 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536 for IP: 192.168.49.2
	I1229 07:10:07.298622  742618 certs.go:195] generating shared ca certs ...
	I1229 07:10:07.298638  742618 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.298817  742618 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:10:07.412878  742618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt ...
	I1229 07:10:07.412910  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt: {Name:mka74c6f7eb008435bd9ff76f8dfa6146153f2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.413137  742618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key ...
	I1229 07:10:07.413153  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key: {Name:mk92e7a84fa6b782a821ed9f0574274116d92579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.413265  742618 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:10:07.574800  742618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt ...
	I1229 07:10:07.574833  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt: {Name:mk68d0e8d032011c6ce84841e0dc15fc1ff230db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.575008  742618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key ...
	I1229 07:10:07.575021  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key: {Name:mk2dd8332a179dc452f81b098dae88dbe5a255d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.575114  742618 certs.go:257] generating profile certs ...
	I1229 07:10:07.575171  742618 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.key
	I1229 07:10:07.575189  742618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt with IP's: []
	I1229 07:10:07.780748  742618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt ...
	I1229 07:10:07.780797  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: {Name:mkfed65694ecbe938cc3268f1fb4549844516ea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.780988  742618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.key ...
	I1229 07:10:07.781001  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.key: {Name:mk251ba589c0d0eecc679f4c30a4c5639b557916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:07.781094  742618 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.key.dc64b18f
	I1229 07:10:07.781116  742618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.crt.dc64b18f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1229 07:10:08.014463  742618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.crt.dc64b18f ...
	I1229 07:10:08.014497  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.crt.dc64b18f: {Name:mk986558f668cd5573e6398d323c2ce718392ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:08.014690  742618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.key.dc64b18f ...
	I1229 07:10:08.014707  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.key.dc64b18f: {Name:mkbb84d15db5370bcdfd73165ef5c6402c20019a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:08.015542  742618 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.crt.dc64b18f -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.crt
	I1229 07:10:08.015636  742618 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.key.dc64b18f -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.key
	I1229 07:10:08.015690  742618 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.key
	I1229 07:10:08.015715  742618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.crt with IP's: []
	I1229 07:10:08.314412  742618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.crt ...
	I1229 07:10:08.314446  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.crt: {Name:mk7b3c71fffcdcdc4583260f692ea730fd4a3c93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:08.314635  742618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.key ...
	I1229 07:10:08.314650  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.key: {Name:mk579374fd1f34d189777864386e82444d06820d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:08.314843  742618 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:10:08.314892  742618 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:10:08.314918  742618 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:10:08.314954  742618 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:10:08.315640  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:10:08.334208  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:10:08.352294  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:10:08.371262  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:10:08.389419  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1229 07:10:08.407499  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:10:08.425603  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:10:08.444013  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:10:08.461741  742618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:10:08.479857  742618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:10:08.494106  742618 ssh_runner.go:195] Run: openssl version
	I1229 07:10:08.500664  742618 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:08.508460  742618 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:10:08.516173  742618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:08.519979  742618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:08.520094  742618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:10:08.561188  742618 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:10:08.568905  742618 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
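	The b5213941.0 symlink above follows OpenSSL's subject-hash trust layout: the hash printed by openssl x509 -hash becomes the link name under /etc/ssl/certs. A sketch of the same two steps, using the paths from this run:
	  # Recreate the hash-named trust link for the minikube CA (sketch)
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"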
	I1229 07:10:08.576576  742618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:10:08.580240  742618 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:10:08.580320  742618 kubeadm.go:401] StartCluster: {Name:addons-707536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-707536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:10:08.580420  742618 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:10:08.580483  742618 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:10:08.612308  742618 cri.go:96] found id: ""
	I1229 07:10:08.612382  742618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:10:08.619835  742618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:10:08.627336  742618 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:10:08.627424  742618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:10:08.634853  742618 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:10:08.634872  742618 kubeadm.go:158] found existing configuration files:
	
	I1229 07:10:08.634953  742618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:10:08.642436  742618 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:10:08.642503  742618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:10:08.649678  742618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:10:08.657471  742618 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:10:08.657648  742618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:10:08.665074  742618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:10:08.672522  742618 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:10:08.672589  742618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:10:08.679775  742618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:10:08.687357  742618 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:10:08.687451  742618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:10:08.694729  742618 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:10:08.801091  742618 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:10:08.801612  742618 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:10:08.866816  742618 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:10:21.300485  742618 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:10:21.300550  742618 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:10:21.300638  742618 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:10:21.300699  742618 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:10:21.300736  742618 kubeadm.go:319] OS: Linux
	I1229 07:10:21.300808  742618 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:10:21.300861  742618 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:10:21.300912  742618 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:10:21.300964  742618 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:10:21.301015  742618 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:10:21.301067  742618 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:10:21.301125  742618 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:10:21.301187  742618 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:10:21.301242  742618 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:10:21.301319  742618 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:10:21.301427  742618 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:10:21.301542  742618 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:10:21.301607  742618 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:10:21.304891  742618 out.go:252]   - Generating certificates and keys ...
	I1229 07:10:21.305010  742618 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:10:21.305089  742618 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:10:21.305162  742618 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:10:21.305225  742618 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:10:21.305290  742618 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:10:21.305345  742618 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:10:21.305403  742618 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:10:21.305525  742618 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-707536 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1229 07:10:21.305582  742618 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:10:21.305707  742618 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-707536 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1229 07:10:21.305777  742618 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:10:21.305845  742618 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:10:21.305894  742618 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:10:21.305953  742618 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:10:21.306008  742618 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:10:21.306069  742618 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:10:21.306127  742618 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:10:21.306194  742618 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:10:21.306252  742618 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:10:21.306338  742618 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:10:21.306409  742618 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:10:21.309304  742618 out.go:252]   - Booting up control plane ...
	I1229 07:10:21.309432  742618 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:10:21.309530  742618 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:10:21.309629  742618 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:10:21.309782  742618 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:10:21.309892  742618 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:10:21.310000  742618 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:10:21.310090  742618 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:10:21.310134  742618 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:10:21.310267  742618 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:10:21.310374  742618 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:10:21.310436  742618 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002783111s
	I1229 07:10:21.310531  742618 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:10:21.310621  742618 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1229 07:10:21.310714  742618 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:10:21.310800  742618 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:10:21.310879  742618 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.020696805s
	I1229 07:10:21.310949  742618 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.895657414s
	I1229 07:10:21.311019  742618 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001262907s
	I1229 07:10:21.311128  742618 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:10:21.311256  742618 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:10:21.311317  742618 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:10:21.311513  742618 kubeadm.go:319] [mark-control-plane] Marking the node addons-707536 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:10:21.311572  742618 kubeadm.go:319] [bootstrap-token] Using token: u43nlp.sh3zm5ha3d5iwrvn
	I1229 07:10:21.314772  742618 out.go:252]   - Configuring RBAC rules ...
	I1229 07:10:21.314915  742618 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:10:21.315013  742618 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:10:21.315178  742618 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:10:21.315311  742618 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:10:21.315431  742618 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:10:21.315521  742618 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:10:21.315639  742618 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:10:21.315688  742618 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:10:21.315738  742618 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:10:21.315745  742618 kubeadm.go:319] 
	I1229 07:10:21.315805  742618 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:10:21.315816  742618 kubeadm.go:319] 
	I1229 07:10:21.315894  742618 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:10:21.315902  742618 kubeadm.go:319] 
	I1229 07:10:21.315930  742618 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:10:21.315993  742618 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:10:21.316046  742618 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:10:21.316054  742618 kubeadm.go:319] 
	I1229 07:10:21.316108  742618 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:10:21.316115  742618 kubeadm.go:319] 
	I1229 07:10:21.316163  742618 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:10:21.316170  742618 kubeadm.go:319] 
	I1229 07:10:21.316223  742618 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:10:21.316301  742618 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:10:21.316374  742618 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:10:21.316381  742618 kubeadm.go:319] 
	I1229 07:10:21.316466  742618 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:10:21.316550  742618 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:10:21.316557  742618 kubeadm.go:319] 
	I1229 07:10:21.316642  742618 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token u43nlp.sh3zm5ha3d5iwrvn \
	I1229 07:10:21.316753  742618 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:10:21.316808  742618 kubeadm.go:319] 	--control-plane 
	I1229 07:10:21.316815  742618 kubeadm.go:319] 
	I1229 07:10:21.316937  742618 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:10:21.316957  742618 kubeadm.go:319] 
	I1229 07:10:21.317051  742618 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token u43nlp.sh3zm5ha3d5iwrvn \
	I1229 07:10:21.317186  742618 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
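	The sha256 value in the join commands is derived from the cluster CA's public key; it can be recomputed from the CA file kubeadm was given (/var/lib/minikube/certs/ca.crt in this run) with the usual openssl pipeline, a sketch that assumes an RSA CA key:
	  # Recompute --discovery-token-ca-cert-hash from the cluster CA (sketch)
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'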
	I1229 07:10:21.317199  742618 cni.go:84] Creating CNI manager for ""
	I1229 07:10:21.317210  742618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:10:21.320341  742618 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:10:21.323259  742618 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:10:21.327418  742618 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:10:21.327438  742618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:10:21.342256  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
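	The manifest applied here is the kindnet CNI that minikube recommends for the docker driver with CRI-O; a quick rollout check using the same kubectl invocation the log uses, assuming the DaemonSet is named kindnet as in minikube's bundled manifest (sketch):
	  # Wait for the kindnet DaemonSet to become ready (sketch; DaemonSet name assumed)
	  sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset kindnet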
	I1229 07:10:21.651932  742618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:10:21.652064  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:21.652142  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-707536 minikube.k8s.io/updated_at=2025_12_29T07_10_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=addons-707536 minikube.k8s.io/primary=true
	I1229 07:10:21.666817  742618 ops.go:34] apiserver oom_adj: -16
	I1229 07:10:21.835734  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:22.336656  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:22.836172  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:23.336155  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:23.836426  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:24.336078  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:24.836522  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:25.336655  742618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:10:25.430144  742618 kubeadm.go:1114] duration metric: took 3.778123533s to wait for elevateKubeSystemPrivileges
	I1229 07:10:25.430174  742618 kubeadm.go:403] duration metric: took 16.84985814s to StartCluster
	I1229 07:10:25.430192  742618 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:25.430338  742618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:10:25.430777  742618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:10:25.431628  742618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:10:25.431656  742618 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:10:25.431918  742618 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:25.431956  742618 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1229 07:10:25.432035  742618 addons.go:70] Setting yakd=true in profile "addons-707536"
	I1229 07:10:25.432048  742618 addons.go:239] Setting addon yakd=true in "addons-707536"
	I1229 07:10:25.432072  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.432562  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.433133  742618 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-707536"
	I1229 07:10:25.433154  742618 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-707536"
	I1229 07:10:25.433179  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.433613  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.434029  742618 addons.go:70] Setting metrics-server=true in profile "addons-707536"
	I1229 07:10:25.434083  742618 addons.go:239] Setting addon metrics-server=true in "addons-707536"
	I1229 07:10:25.434122  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.434704  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.435124  742618 out.go:179] * Verifying Kubernetes components...
	I1229 07:10:25.437038  742618 addons.go:70] Setting cloud-spanner=true in profile "addons-707536"
	I1229 07:10:25.437108  742618 addons.go:239] Setting addon cloud-spanner=true in "addons-707536"
	I1229 07:10:25.437151  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.437760  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.438885  742618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:10:25.438905  742618 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-707536"
	I1229 07:10:25.439574  742618 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-707536"
	I1229 07:10:25.439605  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.440212  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.438913  742618 addons.go:70] Setting default-storageclass=true in profile "addons-707536"
	I1229 07:10:25.454534  742618 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-707536"
	I1229 07:10:25.454901  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.438919  742618 addons.go:70] Setting gcp-auth=true in profile "addons-707536"
	I1229 07:10:25.471174  742618 mustload.go:66] Loading cluster: addons-707536
	I1229 07:10:25.471498  742618 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:10:25.471854  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.438925  742618 addons.go:70] Setting ingress=true in profile "addons-707536"
	I1229 07:10:25.482505  742618 addons.go:239] Setting addon ingress=true in "addons-707536"
	I1229 07:10:25.482580  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.483125  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.438938  742618 addons.go:70] Setting ingress-dns=true in profile "addons-707536"
	I1229 07:10:25.492141  742618 addons.go:239] Setting addon ingress-dns=true in "addons-707536"
	I1229 07:10:25.496960  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.497457  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.438944  742618 addons.go:70] Setting inspektor-gadget=true in profile "addons-707536"
	I1229 07:10:25.439175  742618 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-707536"
	I1229 07:10:25.513014  742618 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-707536"
	I1229 07:10:25.513111  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.513606  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.439184  742618 addons.go:70] Setting registry=true in profile "addons-707536"
	I1229 07:10:25.525155  742618 addons.go:239] Setting addon registry=true in "addons-707536"
	I1229 07:10:25.536188  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.536708  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.439191  742618 addons.go:70] Setting registry-creds=true in profile "addons-707536"
	I1229 07:10:25.544566  742618 addons.go:239] Setting addon registry-creds=true in "addons-707536"
	I1229 07:10:25.545496  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.439203  742618 addons.go:70] Setting storage-provisioner=true in profile "addons-707536"
	I1229 07:10:25.578316  742618 addons.go:239] Setting addon storage-provisioner=true in "addons-707536"
	I1229 07:10:25.578401  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.439209  742618 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-707536"
	I1229 07:10:25.585897  742618 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-707536"
	I1229 07:10:25.586281  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.586535  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.439221  742618 addons.go:70] Setting volumesnapshots=true in profile "addons-707536"
	I1229 07:10:25.604631  742618 addons.go:239] Setting addon volumesnapshots=true in "addons-707536"
	I1229 07:10:25.604683  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.605200  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.504898  742618 addons.go:239] Setting addon inspektor-gadget=true in "addons-707536"
	I1229 07:10:25.611916  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.612499  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.623941  742618 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1229 07:10:25.631314  742618 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1229 07:10:25.632545  742618 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1229 07:10:25.543624  742618 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I1229 07:10:25.586801  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.439216  742618 addons.go:70] Setting volcano=true in profile "addons-707536"
	I1229 07:10:25.632567  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1229 07:10:25.633660  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.657901  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1229 07:10:25.662453  742618 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1229 07:10:25.662481  742618 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1229 07:10:25.662549  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.665729  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1229 07:10:25.672887  742618 addons.go:239] Setting addon volcano=true in "addons-707536"
	I1229 07:10:25.672948  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.673121  742618 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1229 07:10:25.673133  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1229 07:10:25.673191  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.699867  742618 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1229 07:10:25.703899  742618 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1229 07:10:25.704893  742618 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1229 07:10:25.704993  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.711689  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1229 07:10:25.717679  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1229 07:10:25.725541  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1229 07:10:25.727749  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.735840  742618 addons.go:239] Setting addon default-storageclass=true in "addons-707536"
	I1229 07:10:25.735935  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.736487  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.751059  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.778806  742618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1229 07:10:25.818899  742618 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1229 07:10:25.824886  742618 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1229 07:10:25.824911  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1229 07:10:25.824977  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.836936  742618 out.go:179]   - Using image docker.io/registry:3.0.0
	I1229 07:10:25.837136  742618 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1229 07:10:25.837502  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1229 07:10:25.865059  742618 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1229 07:10:25.867960  742618 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1229 07:10:25.867985  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1229 07:10:25.868058  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.838816  742618 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-707536"
	I1229 07:10:25.871468  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:25.871973  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:25.889466  742618 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1229 07:10:25.892345  742618 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1229 07:10:25.892367  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1229 07:10:25.892437  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.901902  742618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1229 07:10:25.902483  742618 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1229 07:10:25.903652  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1229 07:10:25.903713  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.905599  742618 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:10:25.908322  742618 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1229 07:10:25.908385  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1229 07:10:25.911751  742618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1229 07:10:25.912205  742618 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1229 07:10:25.912218  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1229 07:10:25.912294  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.912100  742618 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1229 07:10:25.912572  742618 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1229 07:10:25.912611  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.912119  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1229 07:10:25.937005  742618 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1229 07:10:25.940133  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1229 07:10:25.940161  742618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1229 07:10:25.940237  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.949201  742618 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1229 07:10:25.949224  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1229 07:10:25.949286  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.967896  742618 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:10:25.967918  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:10:25.967979  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.972718  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:25.992739  742618 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:10:25.992760  742618 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:10:25.992861  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:25.993546  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	W1229 07:10:25.994350  742618 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1229 07:10:25.996163  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.031933  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.045190  742618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
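	The pipeline above splices a hosts block for host.minikube.internal into the CoreDNS Corefile and replaces the ConfigMap in place; the patched Corefile can be read back with the same kubectl binary (sketch):
	  # Print the patched Corefile to confirm the hosts block landed (sketch)
	  sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'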
	I1229 07:10:26.045320  742618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:10:26.095718  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.125571  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.145634  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.154488  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.155032  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.156344  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.167492  742618 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1229 07:10:26.175101  742618 out.go:179]   - Using image docker.io/busybox:stable
	I1229 07:10:26.178030  742618 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1229 07:10:26.178055  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1229 07:10:26.178121  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:26.190952  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.192455  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.196686  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.206807  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:26.225181  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	W1229 07:10:26.229595  742618 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1229 07:10:26.229643  742618 retry.go:84] will retry after 300ms: ssh: handshake failed: EOF
	I1229 07:10:26.565232  742618 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1229 07:10:26.565269  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1229 07:10:26.716330  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1229 07:10:26.758663  742618 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1229 07:10:26.758688  742618 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1229 07:10:26.884951  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1229 07:10:26.929337  742618 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1229 07:10:26.929359  742618 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1229 07:10:26.992861  742618 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1229 07:10:26.992883  742618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1229 07:10:27.016667  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1229 07:10:27.019493  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1229 07:10:27.043462  742618 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1229 07:10:27.043528  742618 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1229 07:10:27.070766  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1229 07:10:27.097214  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1229 07:10:27.127297  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1229 07:10:27.158253  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1229 07:10:27.158326  742618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1229 07:10:27.163945  742618 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1229 07:10:27.164022  742618 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1229 07:10:27.298404  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:10:27.330333  742618 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1229 07:10:27.330399  742618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1229 07:10:27.377999  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:10:27.389641  742618 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1229 07:10:27.389717  742618 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1229 07:10:27.396494  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1229 07:10:27.470184  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1229 07:10:27.516229  742618 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1229 07:10:27.516294  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1229 07:10:27.569750  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1229 07:10:27.569817  742618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1229 07:10:27.612275  742618 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1229 07:10:27.612350  742618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1229 07:10:27.667246  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1229 07:10:27.763368  742618 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1229 07:10:27.763441  742618 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1229 07:10:27.938261  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1229 07:10:27.938349  742618 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1229 07:10:27.985813  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1229 07:10:27.985890  742618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1229 07:10:28.069923  742618 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1229 07:10:28.069997  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1229 07:10:28.342959  742618 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1229 07:10:28.343029  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1229 07:10:28.407198  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1229 07:10:28.410472  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1229 07:10:28.410547  742618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1229 07:10:28.770808  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1229 07:10:28.883084  742618 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1229 07:10:28.883154  742618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1229 07:10:28.936571  742618 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1229 07:10:28.936647  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1229 07:10:28.952431  742618 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.907079408s)
	I1229 07:10:28.952514  742618 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.907302877s)
	I1229 07:10:28.952619  742618 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1229 07:10:28.953402  742618 node_ready.go:35] waiting up to 6m0s for node "addons-707536" to be "Ready" ...
	I1229 07:10:29.118853  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.402484734s)
	I1229 07:10:29.291140  742618 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1229 07:10:29.291206  742618 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1229 07:10:29.316257  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.431260156s)
	I1229 07:10:29.462243  742618 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-707536" context rescaled to 1 replicas
	I1229 07:10:29.550501  742618 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1229 07:10:29.550572  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1229 07:10:29.895535  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.878778092s)
	I1229 07:10:29.896771  742618 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1229 07:10:29.896849  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1229 07:10:30.177875  742618 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1229 07:10:30.177954  742618 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1229 07:10:30.418771  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1229 07:10:30.983357  742618 node_ready.go:57] node "addons-707536" has "Ready":"False" status (will retry)
	I1229 07:10:31.546397  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.526824374s)
	I1229 07:10:32.803138  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.732293258s)
	I1229 07:10:32.803215  742618 addons.go:495] Verifying addon ingress=true in "addons-707536"
	I1229 07:10:32.803418  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.706138228s)
	I1229 07:10:32.803576  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.676204985s)
	I1229 07:10:32.803686  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.505215308s)
	I1229 07:10:32.803727  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.425470417s)
	I1229 07:10:32.803770  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.407208904s)
	I1229 07:10:32.803906  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.333648426s)
	I1229 07:10:32.803940  742618 addons.go:495] Verifying addon metrics-server=true in "addons-707536"
	I1229 07:10:32.803995  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.136671761s)
	I1229 07:10:32.804027  742618 addons.go:495] Verifying addon registry=true in "addons-707536"
	I1229 07:10:32.804114  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.033224075s)
	W1229 07:10:32.804142  742618 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1229 07:10:32.804175  742618 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1229 07:10:32.804031  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.396751975s)
	I1229 07:10:32.806520  742618 out.go:179] * Verifying ingress addon...
	I1229 07:10:32.808486  742618 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-707536 service yakd-dashboard -n yakd-dashboard
	
	I1229 07:10:32.808570  742618 out.go:179] * Verifying registry addon...
	I1229 07:10:32.811148  742618 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1229 07:10:32.813160  742618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1229 07:10:32.830031  742618 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1229 07:10:32.830051  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:32.830246  742618 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1229 07:10:32.830254  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1229 07:10:32.839383  742618 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1229 07:10:33.032624  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1229 07:10:33.218850  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.799992351s)
	I1229 07:10:33.218928  742618 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-707536"
	I1229 07:10:33.222985  742618 out.go:179] * Verifying csi-hostpath-driver addon...
	I1229 07:10:33.226113  742618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1229 07:10:33.241715  742618 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1229 07:10:33.241736  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:33.340547  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:33.340852  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:33.441067  742618 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1229 07:10:33.441193  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	W1229 07:10:33.457922  742618 node_ready.go:57] node "addons-707536" has "Ready":"False" status (will retry)
	I1229 07:10:33.461000  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:33.573744  742618 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1229 07:10:33.586393  742618 addons.go:239] Setting addon gcp-auth=true in "addons-707536"
	I1229 07:10:33.586440  742618 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:10:33.586923  742618 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:10:33.604423  742618 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1229 07:10:33.604485  742618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:10:33.622062  742618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:10:33.730302  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:33.814076  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:33.816059  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:34.229354  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:34.314321  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:34.316368  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:34.729644  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:34.815330  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:34.816687  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:35.229773  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:35.314922  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:35.317035  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:35.729772  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:35.805235  742618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.772527853s)
	I1229 07:10:35.805375  742618 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.20092662s)
	I1229 07:10:35.808601  742618 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1229 07:10:35.811452  742618 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1229 07:10:35.814280  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:35.814341  742618 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1229 07:10:35.814367  742618 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1229 07:10:35.817335  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:35.828274  742618 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1229 07:10:35.828298  742618 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1229 07:10:35.841702  742618 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1229 07:10:35.841724  742618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1229 07:10:35.855775  742618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1229 07:10:35.956859  742618 node_ready.go:57] node "addons-707536" has "Ready":"False" status (will retry)
	I1229 07:10:36.231887  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:36.328283  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:36.335443  742618 addons.go:495] Verifying addon gcp-auth=true in "addons-707536"
	I1229 07:10:36.338433  742618 out.go:179] * Verifying gcp-auth addon...
	I1229 07:10:36.339297  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:36.342008  742618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1229 07:10:36.346558  742618 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1229 07:10:36.346579  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:36.729211  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:36.814028  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:36.816432  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:36.845329  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:37.230273  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:37.314129  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:37.316383  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:37.345611  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:37.729898  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:37.815053  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:37.815472  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:37.845273  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1229 07:10:37.957003  742618 node_ready.go:57] node "addons-707536" has "Ready":"False" status (will retry)
	I1229 07:10:38.228987  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:38.315223  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:38.316075  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:38.344747  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:38.729556  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:38.814484  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:38.816607  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:38.845730  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:39.229661  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:39.314665  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:39.316978  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:39.345532  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:39.729355  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:39.814181  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:39.820125  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:39.845030  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:39.971170  742618 node_ready.go:49] node "addons-707536" is "Ready"
	I1229 07:10:39.971197  742618 node_ready.go:38] duration metric: took 11.017744897s for node "addons-707536" to be "Ready" ...
	I1229 07:10:39.971211  742618 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:10:39.971266  742618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:10:40.014389  742618 api_server.go:72] duration metric: took 14.582695112s to wait for apiserver process to appear ...
	I1229 07:10:40.014422  742618 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:10:40.014444  742618 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1229 07:10:40.073326  742618 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1229 07:10:40.076226  742618 api_server.go:141] control plane version: v1.35.0
	I1229 07:10:40.076322  742618 api_server.go:131] duration metric: took 61.880231ms to wait for apiserver health ...
	I1229 07:10:40.076345  742618 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:10:40.099094  742618 system_pods.go:59] 19 kube-system pods found
	I1229 07:10:40.099201  742618 system_pods.go:61] "coredns-7d764666f9-7c7bm" [10e3cbe3-6f9b-427e-85f3-616df0b081b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:10:40.099226  742618 system_pods.go:61] "csi-hostpath-attacher-0" [89aa01df-ebfa-4d12-abec-5c2c25ae5d77] Pending
	I1229 07:10:40.099262  742618 system_pods.go:61] "csi-hostpath-resizer-0" [890f4d65-2571-4f8b-9fc7-035793a289c5] Pending
	I1229 07:10:40.099292  742618 system_pods.go:61] "csi-hostpathplugin-mtvgb" [0ad549b5-6415-497b-97b3-6e26dd41d934] Pending
	I1229 07:10:40.099315  742618 system_pods.go:61] "etcd-addons-707536" [45910cd2-80bc-4f72-b4b6-99ab6a7cfd4d] Running
	I1229 07:10:40.099350  742618 system_pods.go:61] "kindnet-d4k24" [4b09ffd3-9cbb-42bb-b493-648c9c521f5b] Running
	I1229 07:10:40.099374  742618 system_pods.go:61] "kube-apiserver-addons-707536" [29fe93bc-1bdf-4e9e-9858-3018a7de6dd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:10:40.099398  742618 system_pods.go:61] "kube-controller-manager-addons-707536" [e38a06dd-601a-4d49-bb3f-9adf4535b1ae] Running
	I1229 07:10:40.099437  742618 system_pods.go:61] "kube-ingress-dns-minikube" [6cce1782-c000-4864-825e-c60696544d9d] Pending
	I1229 07:10:40.099466  742618 system_pods.go:61] "kube-proxy-s2jhs" [33ad92ca-502f-47e5-94c7-b9385b343a4b] Running
	I1229 07:10:40.099491  742618 system_pods.go:61] "kube-scheduler-addons-707536" [da97ed3a-7a99-45b8-8398-06d3df6c82f3] Running
	I1229 07:10:40.099537  742618 system_pods.go:61] "metrics-server-5778bb4788-gv7xc" [96e7d547-d50c-4767-8c1c-482d040cb491] Pending
	I1229 07:10:40.099567  742618 system_pods.go:61] "nvidia-device-plugin-daemonset-pjdg8" [8049ef46-c6ec-4e47-9c0b-7e287d7c8874] Pending
	I1229 07:10:40.099601  742618 system_pods.go:61] "registry-788cd7d5bc-ts9v4" [7bccdd84-e8c8-430e-bfab-86209f56007d] Pending
	I1229 07:10:40.099620  742618 system_pods.go:61] "registry-creds-567fb78d95-dfv7q" [2c0dedbd-cbd5-49f7-8c8c-e83528ccad75] Pending
	I1229 07:10:40.099641  742618 system_pods.go:61] "registry-proxy-pshgp" [024cd990-58df-496e-9f8c-4d81503eb24e] Pending
	I1229 07:10:40.099681  742618 system_pods.go:61] "snapshot-controller-6588d87457-d2wxt" [e1e24c7b-75cb-4a53-8e91-82f5f0487925] Pending
	I1229 07:10:40.099703  742618 system_pods.go:61] "snapshot-controller-6588d87457-qwg2n" [8549795e-13d1-40ae-a4d4-19ab230225e8] Pending
	I1229 07:10:40.099725  742618 system_pods.go:61] "storage-provisioner" [f49ebf4f-a46c-496c-94a8-28f637fc37c3] Pending
	I1229 07:10:40.099758  742618 system_pods.go:74] duration metric: took 23.391705ms to wait for pod list to return data ...
	I1229 07:10:40.099786  742618 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:10:40.124419  742618 default_sa.go:45] found service account: "default"
	I1229 07:10:40.124501  742618 default_sa.go:55] duration metric: took 24.692042ms for default service account to be created ...
	I1229 07:10:40.124526  742618 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:10:40.199849  742618 system_pods.go:86] 19 kube-system pods found
	I1229 07:10:40.199945  742618 system_pods.go:89] "coredns-7d764666f9-7c7bm" [10e3cbe3-6f9b-427e-85f3-616df0b081b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:10:40.199968  742618 system_pods.go:89] "csi-hostpath-attacher-0" [89aa01df-ebfa-4d12-abec-5c2c25ae5d77] Pending
	I1229 07:10:40.200001  742618 system_pods.go:89] "csi-hostpath-resizer-0" [890f4d65-2571-4f8b-9fc7-035793a289c5] Pending
	I1229 07:10:40.200029  742618 system_pods.go:89] "csi-hostpathplugin-mtvgb" [0ad549b5-6415-497b-97b3-6e26dd41d934] Pending
	I1229 07:10:40.200049  742618 system_pods.go:89] "etcd-addons-707536" [45910cd2-80bc-4f72-b4b6-99ab6a7cfd4d] Running
	I1229 07:10:40.200082  742618 system_pods.go:89] "kindnet-d4k24" [4b09ffd3-9cbb-42bb-b493-648c9c521f5b] Running
	I1229 07:10:40.200103  742618 system_pods.go:89] "kube-apiserver-addons-707536" [29fe93bc-1bdf-4e9e-9858-3018a7de6dd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:10:40.200134  742618 system_pods.go:89] "kube-controller-manager-addons-707536" [e38a06dd-601a-4d49-bb3f-9adf4535b1ae] Running
	I1229 07:10:40.200163  742618 system_pods.go:89] "kube-ingress-dns-minikube" [6cce1782-c000-4864-825e-c60696544d9d] Pending
	I1229 07:10:40.200182  742618 system_pods.go:89] "kube-proxy-s2jhs" [33ad92ca-502f-47e5-94c7-b9385b343a4b] Running
	I1229 07:10:40.200212  742618 system_pods.go:89] "kube-scheduler-addons-707536" [da97ed3a-7a99-45b8-8398-06d3df6c82f3] Running
	I1229 07:10:40.200243  742618 system_pods.go:89] "metrics-server-5778bb4788-gv7xc" [96e7d547-d50c-4767-8c1c-482d040cb491] Pending
	I1229 07:10:40.200262  742618 system_pods.go:89] "nvidia-device-plugin-daemonset-pjdg8" [8049ef46-c6ec-4e47-9c0b-7e287d7c8874] Pending
	I1229 07:10:40.200284  742618 system_pods.go:89] "registry-788cd7d5bc-ts9v4" [7bccdd84-e8c8-430e-bfab-86209f56007d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 07:10:40.200315  742618 system_pods.go:89] "registry-creds-567fb78d95-dfv7q" [2c0dedbd-cbd5-49f7-8c8c-e83528ccad75] Pending
	I1229 07:10:40.200335  742618 system_pods.go:89] "registry-proxy-pshgp" [024cd990-58df-496e-9f8c-4d81503eb24e] Pending
	I1229 07:10:40.200355  742618 system_pods.go:89] "snapshot-controller-6588d87457-d2wxt" [e1e24c7b-75cb-4a53-8e91-82f5f0487925] Pending
	I1229 07:10:40.200380  742618 system_pods.go:89] "snapshot-controller-6588d87457-qwg2n" [8549795e-13d1-40ae-a4d4-19ab230225e8] Pending
	I1229 07:10:40.200409  742618 system_pods.go:89] "storage-provisioner" [f49ebf4f-a46c-496c-94a8-28f637fc37c3] Pending
	I1229 07:10:40.200453  742618 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1229 07:10:40.303423  742618 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1229 07:10:40.303496  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:40.395846  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:40.395983  742618 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1229 07:10:40.396018  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:40.401439  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:40.442204  742618 system_pods.go:86] 19 kube-system pods found
	I1229 07:10:40.442923  742618 system_pods.go:89] "coredns-7d764666f9-7c7bm" [10e3cbe3-6f9b-427e-85f3-616df0b081b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:10:40.442973  742618 system_pods.go:89] "csi-hostpath-attacher-0" [89aa01df-ebfa-4d12-abec-5c2c25ae5d77] Pending
	I1229 07:10:40.443005  742618 system_pods.go:89] "csi-hostpath-resizer-0" [890f4d65-2571-4f8b-9fc7-035793a289c5] Pending
	I1229 07:10:40.443028  742618 system_pods.go:89] "csi-hostpathplugin-mtvgb" [0ad549b5-6415-497b-97b3-6e26dd41d934] Pending
	I1229 07:10:40.443060  742618 system_pods.go:89] "etcd-addons-707536" [45910cd2-80bc-4f72-b4b6-99ab6a7cfd4d] Running
	I1229 07:10:40.443081  742618 system_pods.go:89] "kindnet-d4k24" [4b09ffd3-9cbb-42bb-b493-648c9c521f5b] Running
	I1229 07:10:40.443115  742618 system_pods.go:89] "kube-apiserver-addons-707536" [29fe93bc-1bdf-4e9e-9858-3018a7de6dd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:10:40.443147  742618 system_pods.go:89] "kube-controller-manager-addons-707536" [e38a06dd-601a-4d49-bb3f-9adf4535b1ae] Running
	I1229 07:10:40.443169  742618 system_pods.go:89] "kube-ingress-dns-minikube" [6cce1782-c000-4864-825e-c60696544d9d] Pending
	I1229 07:10:40.443192  742618 system_pods.go:89] "kube-proxy-s2jhs" [33ad92ca-502f-47e5-94c7-b9385b343a4b] Running
	I1229 07:10:40.443219  742618 system_pods.go:89] "kube-scheduler-addons-707536" [da97ed3a-7a99-45b8-8398-06d3df6c82f3] Running
	I1229 07:10:40.443255  742618 system_pods.go:89] "metrics-server-5778bb4788-gv7xc" [96e7d547-d50c-4767-8c1c-482d040cb491] Pending
	I1229 07:10:40.443276  742618 system_pods.go:89] "nvidia-device-plugin-daemonset-pjdg8" [8049ef46-c6ec-4e47-9c0b-7e287d7c8874] Pending
	I1229 07:10:40.443300  742618 system_pods.go:89] "registry-788cd7d5bc-ts9v4" [7bccdd84-e8c8-430e-bfab-86209f56007d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 07:10:40.443333  742618 system_pods.go:89] "registry-creds-567fb78d95-dfv7q" [2c0dedbd-cbd5-49f7-8c8c-e83528ccad75] Pending
	I1229 07:10:40.443358  742618 system_pods.go:89] "registry-proxy-pshgp" [024cd990-58df-496e-9f8c-4d81503eb24e] Pending
	I1229 07:10:40.443379  742618 system_pods.go:89] "snapshot-controller-6588d87457-d2wxt" [e1e24c7b-75cb-4a53-8e91-82f5f0487925] Pending
	I1229 07:10:40.443407  742618 system_pods.go:89] "snapshot-controller-6588d87457-qwg2n" [8549795e-13d1-40ae-a4d4-19ab230225e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 07:10:40.443439  742618 system_pods.go:89] "storage-provisioner" [f49ebf4f-a46c-496c-94a8-28f637fc37c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:10:40.744419  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:40.822319  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:40.827699  742618 system_pods.go:86] 19 kube-system pods found
	I1229 07:10:40.827794  742618 system_pods.go:89] "coredns-7d764666f9-7c7bm" [10e3cbe3-6f9b-427e-85f3-616df0b081b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:10:40.827820  742618 system_pods.go:89] "csi-hostpath-attacher-0" [89aa01df-ebfa-4d12-abec-5c2c25ae5d77] Pending
	I1229 07:10:40.827859  742618 system_pods.go:89] "csi-hostpath-resizer-0" [890f4d65-2571-4f8b-9fc7-035793a289c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 07:10:40.827890  742618 system_pods.go:89] "csi-hostpathplugin-mtvgb" [0ad549b5-6415-497b-97b3-6e26dd41d934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 07:10:40.827913  742618 system_pods.go:89] "etcd-addons-707536" [45910cd2-80bc-4f72-b4b6-99ab6a7cfd4d] Running
	I1229 07:10:40.827943  742618 system_pods.go:89] "kindnet-d4k24" [4b09ffd3-9cbb-42bb-b493-648c9c521f5b] Running
	I1229 07:10:40.827972  742618 system_pods.go:89] "kube-apiserver-addons-707536" [29fe93bc-1bdf-4e9e-9858-3018a7de6dd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:10:40.827996  742618 system_pods.go:89] "kube-controller-manager-addons-707536" [e38a06dd-601a-4d49-bb3f-9adf4535b1ae] Running
	I1229 07:10:40.828032  742618 system_pods.go:89] "kube-ingress-dns-minikube" [6cce1782-c000-4864-825e-c60696544d9d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 07:10:40.828060  742618 system_pods.go:89] "kube-proxy-s2jhs" [33ad92ca-502f-47e5-94c7-b9385b343a4b] Running
	I1229 07:10:40.828084  742618 system_pods.go:89] "kube-scheduler-addons-707536" [da97ed3a-7a99-45b8-8398-06d3df6c82f3] Running
	I1229 07:10:40.828125  742618 system_pods.go:89] "metrics-server-5778bb4788-gv7xc" [96e7d547-d50c-4767-8c1c-482d040cb491] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 07:10:40.828156  742618 system_pods.go:89] "nvidia-device-plugin-daemonset-pjdg8" [8049ef46-c6ec-4e47-9c0b-7e287d7c8874] Pending
	I1229 07:10:40.828182  742618 system_pods.go:89] "registry-788cd7d5bc-ts9v4" [7bccdd84-e8c8-430e-bfab-86209f56007d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 07:10:40.828214  742618 system_pods.go:89] "registry-creds-567fb78d95-dfv7q" [2c0dedbd-cbd5-49f7-8c8c-e83528ccad75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 07:10:40.828253  742618 system_pods.go:89] "registry-proxy-pshgp" [024cd990-58df-496e-9f8c-4d81503eb24e] Pending
	I1229 07:10:40.828276  742618 system_pods.go:89] "snapshot-controller-6588d87457-d2wxt" [e1e24c7b-75cb-4a53-8e91-82f5f0487925] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 07:10:40.828359  742618 system_pods.go:89] "snapshot-controller-6588d87457-qwg2n" [8549795e-13d1-40ae-a4d4-19ab230225e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 07:10:40.828398  742618 system_pods.go:89] "storage-provisioner" [f49ebf4f-a46c-496c-94a8-28f637fc37c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:10:40.833120  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:40.846903  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:41.218026  742618 system_pods.go:86] 19 kube-system pods found
	I1229 07:10:41.218109  742618 system_pods.go:89] "coredns-7d764666f9-7c7bm" [10e3cbe3-6f9b-427e-85f3-616df0b081b9] Running
	I1229 07:10:41.218135  742618 system_pods.go:89] "csi-hostpath-attacher-0" [89aa01df-ebfa-4d12-abec-5c2c25ae5d77] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1229 07:10:41.218159  742618 system_pods.go:89] "csi-hostpath-resizer-0" [890f4d65-2571-4f8b-9fc7-035793a289c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1229 07:10:41.218200  742618 system_pods.go:89] "csi-hostpathplugin-mtvgb" [0ad549b5-6415-497b-97b3-6e26dd41d934] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1229 07:10:41.218222  742618 system_pods.go:89] "etcd-addons-707536" [45910cd2-80bc-4f72-b4b6-99ab6a7cfd4d] Running
	I1229 07:10:41.218254  742618 system_pods.go:89] "kindnet-d4k24" [4b09ffd3-9cbb-42bb-b493-648c9c521f5b] Running
	I1229 07:10:41.218282  742618 system_pods.go:89] "kube-apiserver-addons-707536" [29fe93bc-1bdf-4e9e-9858-3018a7de6dd0] Running
	I1229 07:10:41.218303  742618 system_pods.go:89] "kube-controller-manager-addons-707536" [e38a06dd-601a-4d49-bb3f-9adf4535b1ae] Running
	I1229 07:10:41.218343  742618 system_pods.go:89] "kube-ingress-dns-minikube" [6cce1782-c000-4864-825e-c60696544d9d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1229 07:10:41.218371  742618 system_pods.go:89] "kube-proxy-s2jhs" [33ad92ca-502f-47e5-94c7-b9385b343a4b] Running
	I1229 07:10:41.218395  742618 system_pods.go:89] "kube-scheduler-addons-707536" [da97ed3a-7a99-45b8-8398-06d3df6c82f3] Running
	I1229 07:10:41.218428  742618 system_pods.go:89] "metrics-server-5778bb4788-gv7xc" [96e7d547-d50c-4767-8c1c-482d040cb491] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1229 07:10:41.218457  742618 system_pods.go:89] "nvidia-device-plugin-daemonset-pjdg8" [8049ef46-c6ec-4e47-9c0b-7e287d7c8874] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1229 07:10:41.218481  742618 system_pods.go:89] "registry-788cd7d5bc-ts9v4" [7bccdd84-e8c8-430e-bfab-86209f56007d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1229 07:10:41.218515  742618 system_pods.go:89] "registry-creds-567fb78d95-dfv7q" [2c0dedbd-cbd5-49f7-8c8c-e83528ccad75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1229 07:10:41.218544  742618 system_pods.go:89] "registry-proxy-pshgp" [024cd990-58df-496e-9f8c-4d81503eb24e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1229 07:10:41.218589  742618 system_pods.go:89] "snapshot-controller-6588d87457-d2wxt" [e1e24c7b-75cb-4a53-8e91-82f5f0487925] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 07:10:41.218619  742618 system_pods.go:89] "snapshot-controller-6588d87457-qwg2n" [8549795e-13d1-40ae-a4d4-19ab230225e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1229 07:10:41.218638  742618 system_pods.go:89] "storage-provisioner" [f49ebf4f-a46c-496c-94a8-28f637fc37c3] Running
	I1229 07:10:41.218670  742618 system_pods.go:126] duration metric: took 1.094120902s to wait for k8s-apps to be running ...
	I1229 07:10:41.218698  742618 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:10:41.218783  742618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:10:41.231824  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:41.275123  742618 system_svc.go:56] duration metric: took 56.412773ms WaitForService to wait for kubelet
	I1229 07:10:41.275205  742618 kubeadm.go:587] duration metric: took 15.843518701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:10:41.275243  742618 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:10:41.278992  742618 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:10:41.279070  742618 node_conditions.go:123] node cpu capacity is 2
	I1229 07:10:41.279098  742618 node_conditions.go:105] duration metric: took 3.831833ms to run NodePressure ...
	I1229 07:10:41.279125  742618 start.go:242] waiting for startup goroutines ...
	I1229 07:10:41.316718  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:41.316945  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:41.346587  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:41.730587  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:41.832228  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:41.832549  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:41.845636  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:42.230842  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:42.317594  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:42.319153  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:42.345501  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:42.731029  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:42.832299  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:42.832628  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:42.845553  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:43.230347  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:43.314762  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:43.317480  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:43.345628  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:43.730102  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:43.831313  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:43.831557  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:43.845204  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:44.229758  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:44.316102  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:44.317594  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:44.345266  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:44.729956  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:44.831712  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:44.832348  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:44.847316  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:45.233078  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:45.314646  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:45.317291  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:45.345297  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:45.730509  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:45.815784  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:45.817712  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:45.845935  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:46.229667  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:46.321226  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:46.321353  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:46.345235  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:46.729852  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:46.817109  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:46.818236  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:46.845748  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:47.230328  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:47.314624  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:47.316646  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:47.345246  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:47.730991  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:47.831595  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:47.831962  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:47.844903  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:48.230123  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:48.314428  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:48.316979  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:48.345960  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:48.730347  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:48.831481  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:48.831975  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:48.845488  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:49.230660  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:49.316848  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:49.322802  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:49.347869  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:49.730138  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:49.816470  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:49.818767  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:49.847828  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:50.230238  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:50.316731  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:50.318627  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:50.346086  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:50.729489  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:50.814530  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:50.816375  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:50.845809  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:51.229112  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:51.314915  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:51.316237  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:51.345450  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:51.731302  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:51.816723  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:51.818885  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:51.846139  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:52.230364  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:52.321788  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:52.340593  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:52.429684  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:52.730018  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:52.835415  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:52.835643  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:52.846810  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:53.231969  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:53.319334  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:53.319568  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:53.345352  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:53.734447  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:53.815857  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:53.821923  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:53.846021  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:54.234597  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:54.322834  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:54.324062  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:54.347651  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:54.730787  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:54.816903  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:54.818411  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:54.845801  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:55.230233  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:55.315016  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:55.317928  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:55.346681  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:55.730968  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:55.817929  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:55.818475  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:55.845681  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:56.230266  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:56.314547  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:56.316350  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:56.345357  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:56.730240  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:56.817886  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:56.818147  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:56.845256  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:57.230361  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:57.314745  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:57.316716  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:57.353615  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:57.731457  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:57.815012  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:57.817211  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:57.845292  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:58.230266  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:58.315535  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:58.317219  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:58.345917  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:58.729794  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:58.815974  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:58.817520  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:58.847190  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:59.230152  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:59.315732  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:59.316547  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:59.348896  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:10:59.730504  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:10:59.816379  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:10:59.818862  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:10:59.845634  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:00.245134  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:00.317655  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:00.355137  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:00.357353  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:00.730512  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:00.814652  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:00.817236  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:00.845072  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:01.230106  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:01.315278  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:01.317236  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:01.344983  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:01.729795  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:01.815194  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:01.817628  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:01.846651  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:02.230139  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:02.331632  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:02.331699  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:02.345588  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:02.730861  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:02.814702  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:02.816972  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:02.845850  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:03.230475  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:03.315861  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:03.317508  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:03.353838  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:03.731418  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:03.815172  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:03.817779  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:03.845978  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:04.229609  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:04.322888  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:04.323158  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:04.350211  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:04.730013  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:04.816637  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:04.817875  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:04.846180  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:05.234022  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:05.320455  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:05.320710  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:05.346290  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:05.729879  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:05.815649  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:05.816914  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:05.845200  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:06.229906  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:06.315062  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:06.317574  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:06.345717  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:06.731027  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:06.815008  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:06.817114  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:06.845078  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:07.230683  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:07.315709  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:07.316890  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:07.345953  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:07.730261  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:07.814587  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:07.817001  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:07.845161  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:08.229807  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:08.317093  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:08.317550  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:08.345574  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:08.730737  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:08.815092  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:08.817570  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:08.845637  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:09.231000  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:09.316849  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:09.317088  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:09.345099  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:09.729938  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:09.816877  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:09.818568  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:09.845766  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:10.229737  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:10.330048  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:10.330201  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:10.345187  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:10.730363  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:10.815666  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:10.818051  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:10.846689  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:11.230395  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:11.314683  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:11.316722  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:11.346418  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:11.730696  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:11.815324  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:11.816476  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:11.845863  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:12.229768  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:12.319384  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:12.319788  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:12.346265  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:12.730345  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:12.817980  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:12.833827  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:12.846126  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:13.230960  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:13.330658  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:13.330835  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:13.345885  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:13.730486  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:13.815228  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:13.818252  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:13.845081  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:14.231255  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:14.320975  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:14.321551  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:14.345692  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:14.730492  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:14.814468  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:14.816364  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:14.845421  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:15.230319  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:15.331487  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:15.332085  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:15.346154  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:15.729522  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:15.815991  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:15.817674  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:15.845767  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:16.230636  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:16.314903  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:16.317041  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:16.344673  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:16.730282  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:16.815332  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:16.815976  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:16.845602  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:17.230439  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:17.314480  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:17.316507  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:17.345606  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:17.732219  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:17.817722  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:17.818073  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:17.845633  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:18.231861  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:18.317192  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:18.317544  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:18.345998  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:18.730781  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:18.817918  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:18.818685  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:18.846195  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:19.229973  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:19.315449  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:19.317826  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:19.346055  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:19.729634  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:19.814668  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:19.817342  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:19.845217  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:20.230802  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:20.317499  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:20.317497  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1229 07:11:20.345631  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:20.730629  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:20.815580  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:20.819047  742618 kapi.go:107] duration metric: took 48.005886168s to wait for kubernetes.io/minikube-addons=registry ...
	I1229 07:11:20.845402  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:21.230775  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:21.314837  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:21.346131  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:21.729845  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:21.815117  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:21.845725  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:22.230113  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:22.315911  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:22.345797  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:22.737702  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:22.815102  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:22.845010  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:23.233245  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:23.328745  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:23.354310  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:23.730929  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:23.816654  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:23.850164  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:24.230196  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:24.314829  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:24.349368  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:24.734907  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:24.816566  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:24.846679  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:25.231712  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:25.319319  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:25.349948  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:25.737421  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:25.814690  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:25.846578  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:26.230759  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:26.315431  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:26.346039  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:26.729607  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:26.815136  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:26.845264  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:27.230374  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:27.314785  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:27.346033  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:27.730057  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:27.815734  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:27.846142  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:28.229571  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:28.314569  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:28.345530  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:28.749513  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:28.815475  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:28.845118  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:29.230150  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:29.314049  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:29.345399  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:29.730555  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:29.814896  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:29.845940  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:30.229749  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:30.316906  742618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1229 07:11:30.346149  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:30.730991  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:30.815352  742618 kapi.go:107] duration metric: took 58.004202695s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1229 07:11:30.845609  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:31.231918  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:31.345699  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:31.731792  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:31.853937  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:32.229801  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:32.345869  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:32.731670  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:32.846201  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:33.229988  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:33.345070  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1229 07:11:33.730784  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:33.845665  742618 kapi.go:107] duration metric: took 57.503654914s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1229 07:11:33.851299  742618 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-707536 cluster.
	I1229 07:11:33.854984  742618 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1229 07:11:33.858633  742618 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1229 07:11:34.230508  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:34.730297  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:35.230350  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:35.729730  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:36.231697  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:36.733023  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:37.229603  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:37.730598  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:38.231596  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:38.730761  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:39.231056  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:39.729863  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:40.230254  742618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1229 07:11:40.729398  742618 kapi.go:107] duration metric: took 1m7.503287095s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1229 07:11:40.732561  742618 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, inspektor-gadget, registry-creds, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1229 07:11:40.735459  742618 addons.go:530] duration metric: took 1m15.303499081s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner inspektor-gadget registry-creds ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1229 07:11:40.735516  742618 start.go:247] waiting for cluster config update ...
	I1229 07:11:40.735538  742618 start.go:256] writing updated cluster config ...
	I1229 07:11:40.735824  742618 ssh_runner.go:195] Run: rm -f paused
	I1229 07:11:40.741545  742618 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:11:40.745646  742618 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7c7bm" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:40.750678  742618 pod_ready.go:94] pod "coredns-7d764666f9-7c7bm" is "Ready"
	I1229 07:11:40.750702  742618 pod_ready.go:86] duration metric: took 5.033059ms for pod "coredns-7d764666f9-7c7bm" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:40.753444  742618 pod_ready.go:83] waiting for pod "etcd-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:40.758189  742618 pod_ready.go:94] pod "etcd-addons-707536" is "Ready"
	I1229 07:11:40.758213  742618 pod_ready.go:86] duration metric: took 4.745992ms for pod "etcd-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:40.760841  742618 pod_ready.go:83] waiting for pod "kube-apiserver-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:40.766281  742618 pod_ready.go:94] pod "kube-apiserver-addons-707536" is "Ready"
	I1229 07:11:40.766304  742618 pod_ready.go:86] duration metric: took 5.4424ms for pod "kube-apiserver-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:40.768902  742618 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:41.147900  742618 pod_ready.go:94] pod "kube-controller-manager-addons-707536" is "Ready"
	I1229 07:11:41.147930  742618 pod_ready.go:86] duration metric: took 379.005719ms for pod "kube-controller-manager-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:41.346544  742618 pod_ready.go:83] waiting for pod "kube-proxy-s2jhs" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:41.745951  742618 pod_ready.go:94] pod "kube-proxy-s2jhs" is "Ready"
	I1229 07:11:41.745980  742618 pod_ready.go:86] duration metric: took 399.360841ms for pod "kube-proxy-s2jhs" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:41.944959  742618 pod_ready.go:83] waiting for pod "kube-scheduler-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:42.345870  742618 pod_ready.go:94] pod "kube-scheduler-addons-707536" is "Ready"
	I1229 07:11:42.345972  742618 pod_ready.go:86] duration metric: took 400.964032ms for pod "kube-scheduler-addons-707536" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:11:42.346006  742618 pod_ready.go:40] duration metric: took 1.604430377s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:11:42.400927  742618 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:11:42.404166  742618 out.go:203] 
	W1229 07:11:42.407099  742618 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:11:42.409954  742618 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:11:42.412825  742618 out.go:179] * Done! kubectl is now configured to use "addons-707536" cluster and "default" namespace by default
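	
	For reference, a minimal client-go sketch of the same readiness check that pod_ready.go performs in the log above; this is not minikube's implementation, and the kubeconfig path is a placeholder:
	
	// readiness_sketch.go: list kube-system pods and report whether each has the Ready condition.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder path; any kubeconfig pointing at the cluster under test works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v\n", pod.Name, ready)
		}
	}
	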
	
	
	==> CRI-O <==
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.127314736Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133 Namespace:local-path-storage ID:44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a UID:12036192-c344-413d-bcba-d2498e954c3a NetNS:/var/run/netns/db6c16a8-860a-4b54-927a-8d0639bad23d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004aad18}] Aliases:map[]}"
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.127492149Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133 for CNI network kindnet (type=ptp)"
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.135521627Z" level=info msg="Ran pod sandbox 44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a with infra container: local-path-storage/helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133/POD" id=5c2b9814-e9b4-4a9b-b89b-195ed8fcfca7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.139869765Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f06447ea-db81-478f-a63c-e9207dc60ad6 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.143342406Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=41dadff1-aa40-4881-a24b-c725ac6c431c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.153162794Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133/helper-pod" id=2fe6a30c-92a6-4700-a4f1-91a12309531e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.153312991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.177914382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.180168927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.209668861Z" level=info msg="Created container 3aa4ad20e3b7185e0eab5cd86879a63ab7a5fbb902b8845d4c223fb5c8087ddb: local-path-storage/helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133/helper-pod" id=2fe6a30c-92a6-4700-a4f1-91a12309531e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.210712646Z" level=info msg="Starting container: 3aa4ad20e3b7185e0eab5cd86879a63ab7a5fbb902b8845d4c223fb5c8087ddb" id=54899c24-b0bb-4eb2-817c-9f84c5b1842c name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:12:24 addons-707536 crio[830]: time="2025-12-29T07:12:24.212512286Z" level=info msg="Started container" PID=5807 containerID=3aa4ad20e3b7185e0eab5cd86879a63ab7a5fbb902b8845d4c223fb5c8087ddb description=local-path-storage/helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133/helper-pod id=54899c24-b0bb-4eb2-817c-9f84c5b1842c name=/runtime.v1.RuntimeService/StartContainer sandboxID=44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.185159413Z" level=info msg="Stopping container: c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771 (timeout: 30s)" id=801deeb6-d5f0-4ac0-b8cd-221a2d6e4a01 name=/runtime.v1.RuntimeService/StopContainer
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.299264874Z" level=info msg="Stopped container c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771: default/task-pv-pod/task-pv-container" id=801deeb6-d5f0-4ac0-b8cd-221a2d6e4a01 name=/runtime.v1.RuntimeService/StopContainer
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.299950756Z" level=info msg="Stopping pod sandbox: 95f47ca560bc40579dab98bd4513a5edbdbd33a1e42382a33d6e7ffc7677b019" id=d058cc6e-b444-478b-8939-6bff1306d449 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.300240966Z" level=info msg="Got pod network &{Name:task-pv-pod Namespace:default ID:95f47ca560bc40579dab98bd4513a5edbdbd33a1e42382a33d6e7ffc7677b019 UID:21614047-7518-4bd4-b89e-f857f65461b3 NetNS:/var/run/netns/78e59c93-c3ea-4034-b543-95754aedbb14 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000064ab8}] Aliases:map[]}"
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.300383367Z" level=info msg="Deleting pod default_task-pv-pod from CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.341807505Z" level=info msg="Stopped pod sandbox: 95f47ca560bc40579dab98bd4513a5edbdbd33a1e42382a33d6e7ffc7677b019" id=d058cc6e-b444-478b-8939-6bff1306d449 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.834919049Z" level=info msg="Stopping pod sandbox: 44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a" id=e1875b4a-34b4-4548-8216-0f3540e55de4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.835190921Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133 Namespace:local-path-storage ID:44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a UID:12036192-c344-413d-bcba-d2498e954c3a NetNS:/var/run/netns/db6c16a8-860a-4b54-927a-8d0639bad23d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ab880}] Aliases:map[]}"
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.835341265Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133 from CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.836664568Z" level=info msg="Removing container: c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771" id=95f797b5-7ed5-4794-a918-29e9bd421e1e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.840214486Z" level=info msg="Error loading conmon cgroup of container c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771: cgroup deleted" id=95f797b5-7ed5-4794-a918-29e9bd421e1e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.845394599Z" level=info msg="Removed container c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771: default/task-pv-pod/task-pv-container" id=95f797b5-7ed5-4794-a918-29e9bd421e1e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:12:25 addons-707536 crio[830]: time="2025-12-29T07:12:25.90106901Z" level=info msg="Stopped pod sandbox: 44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a" id=e1875b4a-34b4-4548-8216-0f3540e55de4 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	3aa4ad20e3b71       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             7 seconds ago        Exited              helper-pod                               0                   44748fd73c728       helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133   local-path-storage
	7675d9077f06b       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            10 seconds ago       Exited              busybox                                  0                   63e8fdeeab88a       test-local-path                                              default
	207dba9e975b9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          46 seconds ago       Running             busybox                                  0                   68c0fcee9ee00       busybox                                                      default
	a8c420d116959       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          52 seconds ago       Running             csi-snapshotter                          0                   f43e236a4d560       csi-hostpathplugin-mtvgb                                     kube-system
	e02d944b39737       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          53 seconds ago       Running             csi-provisioner                          0                   f43e236a4d560       csi-hostpathplugin-mtvgb                                     kube-system
	9c92f38ed26e2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            55 seconds ago       Running             liveness-probe                           0                   f43e236a4d560       csi-hostpathplugin-mtvgb                                     kube-system
	6cac5ee1eec0e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           56 seconds ago       Running             hostpath                                 0                   f43e236a4d560       csi-hostpathplugin-mtvgb                                     kube-system
	1514517a2064e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                57 seconds ago       Running             node-driver-registrar                    0                   f43e236a4d560       csi-hostpathplugin-mtvgb                                     kube-system
	684b190b8e658       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 58 seconds ago       Running             gcp-auth                                 0                   6f554fe6a4013       gcp-auth-5bbcf684b5-r6znv                                    gcp-auth
	54373d8b70e88       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             About a minute ago   Running             controller                               0                   c2f75d6c57764       ingress-nginx-controller-7847b5c79c-xjn6d                    ingress-nginx
	5067d4f8e2769       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            About a minute ago   Running             gadget                                   0                   74acb3039883d       gadget-pp66d                                                 gadget
	ec49538f7adea       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   4dd1d9097995d       registry-proxy-pshgp                                         kube-system
	f1200919fca5b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   f43e236a4d560       csi-hostpathplugin-mtvgb                                     kube-system
	a13d9ca12c599       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   905650078b158       ingress-nginx-admission-patch-zpgsw                          ingress-nginx
	d092a1ce96421       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   5342dbf55cdff       local-path-provisioner-c44bcd496-tkk8q                       local-path-storage
	4e0aaeef1de81       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   31900044c0ccc       snapshot-controller-6588d87457-d2wxt                         kube-system
	71280a6e3cebf       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   ef926470073a8       csi-hostpath-resizer-0                                       kube-system
	43c7e5c70fe3b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   1ade9459c4717       csi-hostpath-attacher-0                                      kube-system
	23b4a4280375c       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   a8256228b301a       nvidia-device-plugin-daemonset-pjdg8                         kube-system
	7fa724e7ac53c       ghcr.io/manusa/yakd@sha256:68bfcea671292190cdd2b127455726ac24794d1f7c55ce74c33d4648a3a0f50b                                                  About a minute ago   Running             yakd                                     0                   5e5673ae35490       yakd-dashboard-7bcf5795cd-t9fnr                              yakd-dashboard
	2d78a871a1296       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   2f663bfaf3f41       ingress-nginx-admission-create-qshpb                         ingress-nginx
	0f6a0e2e497da       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   b5f5b512cf04a       snapshot-controller-6588d87457-qwg2n                         kube-system
	7e3ee321349e1       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   60bb8d98870ca       registry-788cd7d5bc-ts9v4                                    kube-system
	80ed86e984cde       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               About a minute ago   Running             cloud-spanner-emulator                   0                   3ebdb505a099c       cloud-spanner-emulator-5649ccbc87-th6pp                      default
	14bad4c028d24       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   a353a41fa285b       metrics-server-5778bb4788-gv7xc                              kube-system
	fc8b218ce7e0e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   28d3f68d34dfd       kube-ingress-dns-minikube                                    kube-system
	d1202fc68a4be       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   b9dfa8bd57012       storage-provisioner                                          kube-system
	09b42f25358fc       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             About a minute ago   Running             coredns                                  0                   99c307f8607c9       coredns-7d764666f9-7c7bm                                     kube-system
	7f5e40dfe3cba       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           2 minutes ago        Running             kindnet-cni                              0                   73b81d888c2d9       kindnet-d4k24                                                kube-system
	0964d931910c1       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             2 minutes ago        Running             kube-proxy                               0                   115f4ace4ebad       kube-proxy-s2jhs                                             kube-system
	4d0dcebd8d556       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             2 minutes ago        Running             kube-apiserver                           0                   0e3c98d7b5377       kube-apiserver-addons-707536                                 kube-system
	e3dc9466eaccd       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             2 minutes ago        Running             kube-scheduler                           0                   6ceaadbbb6190       kube-scheduler-addons-707536                                 kube-system
	8e743d1a3e50e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             2 minutes ago        Running             kube-controller-manager                  0                   abcad586cb6c9       kube-controller-manager-addons-707536                        kube-system
	f1b29c84acbf4       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             2 minutes ago        Running             etcd                                     0                   5fb175dd91afb       etcd-addons-707536                                           kube-system
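	
	The container table above is what CRI-O reports over its CRI socket; for reference, a minimal Go sketch of the same query using k8s.io/cri-api, assuming the default /var/run/crio/crio.sock endpoint (this is not part of the test suite):
	
	// cri_list_sketch.go: list containers via the CRI RuntimeService, as crictl ps -a does.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// Assumed socket path; adjust if the runtime uses a different endpoint.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			id := c.Id
			if len(id) > 13 {
				id = id[:13] // match the truncated IDs shown in the table above
			}
			fmt.Printf("%s  %s  %s\n", id, c.State, c.Metadata.Name)
		}
	}
	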
	
	
	==> coredns [09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0] <==
	[INFO] 10.244.0.18:53974 - 61908 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003478747s
	[INFO] 10.244.0.18:53974 - 13279 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000183353s
	[INFO] 10.244.0.18:53974 - 42501 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000184806s
	[INFO] 10.244.0.18:54842 - 29228 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00021007s
	[INFO] 10.244.0.18:54842 - 29450 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000303306s
	[INFO] 10.244.0.18:49055 - 12071 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152962s
	[INFO] 10.244.0.18:49055 - 11858 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000188392s
	[INFO] 10.244.0.18:39421 - 15805 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110869s
	[INFO] 10.244.0.18:39421 - 15608 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086581s
	[INFO] 10.244.0.18:38207 - 28096 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001348789s
	[INFO] 10.244.0.18:38207 - 28268 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001451707s
	[INFO] 10.244.0.18:45634 - 43046 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144091s
	[INFO] 10.244.0.18:45634 - 42874 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068038s
	[INFO] 10.244.0.20:41391 - 43199 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000236638s
	[INFO] 10.244.0.20:50490 - 25814 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000150689s
	[INFO] 10.244.0.20:39159 - 12133 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156678s
	[INFO] 10.244.0.20:52900 - 13199 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121199s
	[INFO] 10.244.0.20:49427 - 12830 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165072s
	[INFO] 10.244.0.20:33851 - 24265 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074176s
	[INFO] 10.244.0.20:52526 - 56651 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002014855s
	[INFO] 10.244.0.20:37153 - 50071 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002249335s
	[INFO] 10.244.0.20:45852 - 12658 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001237936s
	[INFO] 10.244.0.20:42447 - 37164 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001432309s
	[INFO] 10.244.0.22:38107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156154s
	[INFO] 10.244.0.22:56272 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128756s
	
	
	==> describe nodes <==
	Name:               addons-707536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-707536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=addons-707536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_10_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-707536
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-707536"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:10:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-707536
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:12:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:12:23 +0000   Mon, 29 Dec 2025 07:10:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:12:23 +0000   Mon, 29 Dec 2025 07:10:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:12:23 +0000   Mon, 29 Dec 2025 07:10:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:12:23 +0000   Mon, 29 Dec 2025 07:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-707536
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9ca329d3-6d99-46d1-ad53-83c18ecf438d
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     cloud-spanner-emulator-5649ccbc87-th6pp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gadget                      gadget-pp66d                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gcp-auth                    gcp-auth-5bbcf684b5-r6znv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-xjn6d    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m
	  kube-system                 coredns-7d764666f9-7c7bm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpathplugin-mtvgb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 etcd-addons-707536                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-d4k24                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-addons-707536                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-addons-707536        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-s2jhs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-addons-707536                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 metrics-server-5778bb4788-gv7xc              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m1s
	  kube-system                 nvidia-device-plugin-daemonset-pjdg8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-788cd7d5bc-ts9v4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-567fb78d95-dfv7q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-proxy-pshgp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 snapshot-controller-6588d87457-d2wxt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 snapshot-controller-6588d87457-qwg2n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-c44bcd496-tkk8q       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-t9fnr              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  2m8s  node-controller  Node addons-707536 event: Registered Node addons-707536 in Controller
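	
	For reference, the percentages in the Allocated resources table above can be reproduced by summing container CPU requests for pods on the node and dividing by allocatable CPU; a minimal client-go sketch, with a placeholder kubeconfig path:
	
	// cpu_requests_sketch.go: compute requested CPU as a fraction of node allocatable.
	package main
	
	import (
		"context"
		"fmt"
	
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()
	
		node, err := client.CoreV1().Nodes().Get(ctx, "addons-707536", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=addons-707536",
		})
		if err != nil {
			panic(err)
		}
	
		requested := resource.NewMilliQuantity(0, resource.DecimalSI)
		for _, pod := range pods.Items {
			for _, c := range pod.Spec.Containers {
				if cpu, ok := c.Resources.Requests["cpu"]; ok {
					requested.Add(cpu)
				}
			}
		}
		alloc := node.Status.Allocatable["cpu"]
		pct := float64(requested.MilliValue()) / float64(alloc.MilliValue()) * 100
		// With the requests in the table above this works out to 1050m / 2000m, about 52%.
		fmt.Printf("cpu requests: %s of %s (%.0f%%)\n", requested.String(), alloc.String(), pct)
	}
	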
	
	
	==> dmesg <==
	[Dec29 05:19] hrtimer: interrupt took 3790300 ns
	[Dec29 06:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec29 06:43] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[  +0.590854] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[Dec29 07:08] kauditd_printk_skb: 8 callbacks suppressed
	[Dec29 07:10] overlayfs: idmapped layers are currently not supported
	[  +0.055546] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578] <==
	{"level":"info","ts":"2025-12-29T07:10:15.128835Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:10:15.184819Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:15.184932Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:15.185007Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-12-29T07:10:15.185066Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:10:15.185209Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:15.188840Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:15.188972Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:10:15.189016Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:15.189052Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-29T07:10:15.192885Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:15.197014Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-707536 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:10:15.197188Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:10:15.197319Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:10:15.197414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:10:15.197451Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:10:15.198284Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:10:15.200232Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-29T07:10:15.206288Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:10:15.207143Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:10:15.207537Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:15.207665Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:15.207738Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:10:15.212851Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:10:15.213004Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> gcp-auth [684b190b8e658efcf4d5ff5446dae86583fb09542446fb6380a4064551a5c25a] <==
	2025/12/29 07:11:33 GCP Auth Webhook started!
	2025/12/29 07:11:42 Ready to marshal response ...
	2025/12/29 07:11:42 Ready to write response ...
	2025/12/29 07:11:43 Ready to marshal response ...
	2025/12/29 07:11:43 Ready to write response ...
	2025/12/29 07:11:43 Ready to marshal response ...
	2025/12/29 07:11:43 Ready to write response ...
	2025/12/29 07:12:04 Ready to marshal response ...
	2025/12/29 07:12:04 Ready to write response ...
	2025/12/29 07:12:14 Ready to marshal response ...
	2025/12/29 07:12:14 Ready to write response ...
	2025/12/29 07:12:14 Ready to marshal response ...
	2025/12/29 07:12:14 Ready to write response ...
	2025/12/29 07:12:15 Ready to marshal response ...
	2025/12/29 07:12:15 Ready to write response ...
	2025/12/29 07:12:23 Ready to marshal response ...
	2025/12/29 07:12:23 Ready to write response ...
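	
	The webhook above mutates new pods to mount the GCP credentials; as the enable-addon output earlier notes, a pod can opt out via a label with the gcp-auth-skip-secret key. A minimal client-go sketch of creating such a pod (namespace, pod name, image, and label value are illustrative):
	
	// skip_gcp_auth_sketch.go: create a pod labeled so the gcp-auth webhook leaves it unmutated.
	package main
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds",
				// The key is what matters per the addon output; the value here is arbitrary.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox:stable",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
	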
	
	
	==> kernel <==
	 07:12:32 up  3:55,  0 user,  load average: 1.71, 2.50, 2.75
	Linux addons-707536 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1] <==
	I1229 07:10:29.665267       1 controller.go:711] "Syncing nftables rules"
	I1229 07:10:39.505127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:10:39.505259       1 main.go:301] handling current node
	I1229 07:10:49.510398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:10:49.510428       1 main.go:301] handling current node
	I1229 07:10:59.505444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:10:59.505476       1 main.go:301] handling current node
	I1229 07:11:09.505021       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:11:09.505081       1 main.go:301] handling current node
	I1229 07:11:19.505489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:11:19.505547       1 main.go:301] handling current node
	I1229 07:11:29.505653       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:11:29.505686       1 main.go:301] handling current node
	I1229 07:11:39.505027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:11:39.505060       1 main.go:301] handling current node
	I1229 07:11:49.505089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:11:49.505123       1 main.go:301] handling current node
	I1229 07:11:59.506066       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:11:59.506107       1 main.go:301] handling current node
	I1229 07:12:09.506062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:12:09.506110       1 main.go:301] handling current node
	I1229 07:12:19.505795       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:12:19.505841       1 main.go:301] handling current node
	I1229 07:12:29.506709       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1229 07:12:29.506814       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1] <==
	W1229 07:10:39.885410       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.238.193:443: connect: connection refused
	E1229 07:10:39.885455       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.238.193:443: connect: connection refused" logger="UnhandledError"
	W1229 07:10:39.896295       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.238.193:443: connect: connection refused
	E1229 07:10:39.896343       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.238.193:443: connect: connection refused" logger="UnhandledError"
	W1229 07:10:39.974186       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.238.193:443: connect: connection refused
	E1229 07:10:39.974229       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.238.193:443: connect: connection refused" logger="UnhandledError"
	W1229 07:10:54.520859       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1229 07:10:54.549473       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1229 07:10:54.667402       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1229 07:10:54.688715       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1229 07:11:03.416399       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.56.144:443: connect: connection refused" logger="UnhandledError"
	W1229 07:11:03.416624       1 handler_proxy.go:99] no RequestInfo found in the context
	E1229 07:11:03.416741       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1229 07:11:03.421194       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.56.144:443: connect: connection refused" logger="UnhandledError"
	E1229 07:11:03.423028       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.56.144:443: connect: connection refused" logger="UnhandledError"
	E1229 07:11:03.444181       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.56.144:443: connect: connection refused" logger="UnhandledError"
	E1229 07:11:03.492289       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.56.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.56.144:443: connect: connection refused" logger="UnhandledError"
	I1229 07:11:03.677836       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1229 07:11:51.421927       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50702: use of closed network connection
	E1229 07:11:51.659101       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50736: use of closed network connection
	I1229 07:12:23.982455       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1229 07:12:25.866320       1 watch.go:270] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37] <==
	I1229 07:10:24.481840       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.481886       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.482915       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.483021       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.483066       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.483443       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.483481       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.483727       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.483769       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.505139       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:24.506216       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.513233       1 range_allocator.go:433] "Set node PodCIDR" node="addons-707536" podCIDRs=["10.244.0.0/24"]
	I1229 07:10:24.578462       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:24.578488       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:10:24.578495       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:10:24.605856       1 shared_informer.go:377] "Caches are synced"
	E1229 07:10:31.505600       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1229 07:10:44.459727       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E1229 07:10:54.511658       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1229 07:10:54.511817       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1229 07:10:54.511871       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:54.612357       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:54.637571       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1229 07:10:54.656862       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:54.757427       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854] <==
	I1229 07:10:26.611006       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:10:26.722537       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:10:26.823432       1 shared_informer.go:377] "Caches are synced"
	I1229 07:10:26.823474       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1229 07:10:26.823540       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:10:26.857785       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:10:26.857845       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:10:26.862264       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:10:26.862564       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:10:26.862577       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:10:26.866175       1 config.go:200] "Starting service config controller"
	I1229 07:10:26.866193       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:10:26.866232       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:10:26.866236       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:10:26.866249       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:10:26.866253       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:10:26.866944       1 config.go:309] "Starting node config controller"
	I1229 07:10:26.866952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:10:26.866958       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:10:26.966930       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:10:26.966939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:10:26.966977       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f] <==
	E1229 07:10:17.823345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:10:17.823446       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:10:17.823561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:10:17.825867       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:10:17.825998       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:10:17.826119       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:10:17.826227       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:10:17.826325       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:10:17.826461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:10:17.826561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:10:17.842317       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:10:17.842341       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:10:17.842544       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:10:17.842643       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:10:17.842739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:10:18.615160       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:10:18.650213       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:10:18.682397       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:10:18.764201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:10:18.786761       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:10:18.821159       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:10:18.856757       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:10:18.862009       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:10:18.867644       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	I1229 07:10:21.155920       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:12:25 addons-707536 kubelet[1256]: I1229 07:12:25.549745    1256 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f5b63983-7539-4ae8-9848-1bd5a8131ea1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b3659f08-e485-11f0-a966-8e370994cf32\") on node \"addons-707536\" "
	Dec 29 07:12:25 addons-707536 kubelet[1256]: I1229 07:12:25.554960    1256 operation_generator.go:892] UnmountDevice succeeded for volume "pvc-f5b63983-7539-4ae8-9848-1bd5a8131ea1" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^b3659f08-e485-11f0-a966-8e370994cf32") on node "addons-707536"
	Dec 29 07:12:25 addons-707536 kubelet[1256]: I1229 07:12:25.650674    1256 reconciler_common.go:299] "Volume detached for volume \"pvc-f5b63983-7539-4ae8-9848-1bd5a8131ea1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b3659f08-e485-11f0-a966-8e370994cf32\") on node \"addons-707536\" DevicePath \"\""
	Dec 29 07:12:25 addons-707536 kubelet[1256]: I1229 07:12:25.834102    1256 scope.go:122] "RemoveContainer" containerID="c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771"
	Dec 29 07:12:25 addons-707536 kubelet[1256]: I1229 07:12:25.846681    1256 scope.go:122] "RemoveContainer" containerID="c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771"
	Dec 29 07:12:25 addons-707536 kubelet[1256]: E1229 07:12:25.848998    1256 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771\": container with ID starting with c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771 not found: ID does not exist" containerID="c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771"
	Dec 29 07:12:25 addons-707536 kubelet[1256]: I1229 07:12:25.849060    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771"} err="failed to get container status \"c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771\": rpc error: code = NotFound desc = could not find container \"c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771\": container with ID starting with c54de006d36ada64748ca3e07f8ab05f9cc2ce739577b5488872fa232ee41771 not found: ID does not exist"
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.057481    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-gcp-creds\") pod \"12036192-c344-413d-bcba-d2498e954c3a\" (UID: \"12036192-c344-413d-bcba-d2498e954c3a\") "
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.057532    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-data\" (UniqueName: \"kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-data\") pod \"12036192-c344-413d-bcba-d2498e954c3a\" (UID: \"12036192-c344-413d-bcba-d2498e954c3a\") "
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.057609    1256 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-gcp-creds" pod "12036192-c344-413d-bcba-d2498e954c3a" (UID: "12036192-c344-413d-bcba-d2498e954c3a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.057678    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/12036192-c344-413d-bcba-d2498e954c3a-kube-api-access-7tf7t\" (UniqueName: \"kubernetes.io/projected/12036192-c344-413d-bcba-d2498e954c3a-kube-api-access-7tf7t\") pod \"12036192-c344-413d-bcba-d2498e954c3a\" (UID: \"12036192-c344-413d-bcba-d2498e954c3a\") "
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.057715    1256 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-data" pod "12036192-c344-413d-bcba-d2498e954c3a" (UID: "12036192-c344-413d-bcba-d2498e954c3a"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.057741    1256 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/12036192-c344-413d-bcba-d2498e954c3a-script\" (UniqueName: \"kubernetes.io/configmap/12036192-c344-413d-bcba-d2498e954c3a-script\") pod \"12036192-c344-413d-bcba-d2498e954c3a\" (UID: \"12036192-c344-413d-bcba-d2498e954c3a\") "
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.058228    1256 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-gcp-creds\") on node \"addons-707536\" DevicePath \"\""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.058294    1256 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/12036192-c344-413d-bcba-d2498e954c3a-data\") on node \"addons-707536\" DevicePath \"\""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.058623    1256 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12036192-c344-413d-bcba-d2498e954c3a-script" pod "12036192-c344-413d-bcba-d2498e954c3a" (UID: "12036192-c344-413d-bcba-d2498e954c3a"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.065269    1256 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12036192-c344-413d-bcba-d2498e954c3a-kube-api-access-7tf7t" pod "12036192-c344-413d-bcba-d2498e954c3a" (UID: "12036192-c344-413d-bcba-d2498e954c3a"). InnerVolumeSpecName "kube-api-access-7tf7t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.159383    1256 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7tf7t\" (UniqueName: \"kubernetes.io/projected/12036192-c344-413d-bcba-d2498e954c3a-kube-api-access-7tf7t\") on node \"addons-707536\" DevicePath \"\""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.159449    1256 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/12036192-c344-413d-bcba-d2498e954c3a-script\") on node \"addons-707536\" DevicePath \"\""
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.632459    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="21614047-7518-4bd4-b89e-f857f65461b3" path="/var/lib/kubelet/pods/21614047-7518-4bd4-b89e-f857f65461b3/volumes"
	Dec 29 07:12:26 addons-707536 kubelet[1256]: I1229 07:12:26.840317    1256 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44748fd73c728be8fa81c22f62e5effeab3b1eb47f7cee7bd7550eb9eadd174a"
	Dec 29 07:12:26 addons-707536 kubelet[1256]: E1229 07:12:26.842140    1256 status_manager.go:1045] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133\" is forbidden: User \"system:node:addons-707536\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-707536' and this object" podUID="12036192-c344-413d-bcba-d2498e954c3a" pod="local-path-storage/helper-pod-delete-pvc-9b09faec-d460-4016-a891-cede84331133"
	Dec 29 07:12:28 addons-707536 kubelet[1256]: I1229 07:12:28.629651    1256 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-pjdg8" secret="" err="secret \"gcp-auth\" not found"
	Dec 29 07:12:28 addons-707536 kubelet[1256]: I1229 07:12:28.632912    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="12036192-c344-413d-bcba-d2498e954c3a" path="/var/lib/kubelet/pods/12036192-c344-413d-bcba-d2498e954c3a/volumes"
	Dec 29 07:12:30 addons-707536 kubelet[1256]: I1229 07:12:30.630598    1256 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pshgp" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4] <==
	W1229 07:12:07.279974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:09.283338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:09.288015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:11.291079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:11.299398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:13.302053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:13.307267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:15.310489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:15.317697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:17.321440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:17.327899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:19.331851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:19.338664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:21.341925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:21.346746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:23.349590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:23.355445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:25.362317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:25.369773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:27.374179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:27.381713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:29.385275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:29.389802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:31.394002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:12:31.403597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-707536 -n addons-707536
helpers_test.go:270: (dbg) Run:  kubectl --context addons-707536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-qshpb ingress-nginx-admission-patch-zpgsw registry-creds-567fb78d95-dfv7q
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-707536 describe pod ingress-nginx-admission-create-qshpb ingress-nginx-admission-patch-zpgsw registry-creds-567fb78d95-dfv7q
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-707536 describe pod ingress-nginx-admission-create-qshpb ingress-nginx-admission-patch-zpgsw registry-creds-567fb78d95-dfv7q: exit status 1 (92.31401ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qshpb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zpgsw" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-dfv7q" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-707536 describe pod ingress-nginx-admission-create-qshpb ingress-nginx-admission-patch-zpgsw registry-creds-567fb78d95-dfv7q: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable headlamp --alsologtostderr -v=1: exit status 11 (274.788596ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:12:33.471291  750501 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:33.471909  750501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:33.471943  750501 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:33.471963  750501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:33.472252  750501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:33.472624  750501 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:33.473079  750501 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:33.473124  750501 addons.go:622] checking whether the cluster is paused
	I1229 07:12:33.473311  750501 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:33.473344  750501 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:33.473921  750501 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:33.491896  750501 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:33.491966  750501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:33.509462  750501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:33.619786  750501 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:33.619870  750501 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:33.667605  750501 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:33.667669  750501 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:33.667690  750501 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:33.667711  750501 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:33.667748  750501 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:33.667769  750501 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:33.667788  750501 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:33.667807  750501 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:33.667828  750501 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:33.667867  750501 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:33.667892  750501 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:33.667912  750501 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:33.667934  750501 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:33.667954  750501 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:33.667983  750501 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:33.668020  750501 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:33.668041  750501 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:33.668062  750501 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:33.668082  750501 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:33.668115  750501 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:33.668134  750501 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:33.668153  750501 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:33.668173  750501 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:33.668202  750501 cri.go:96] found id: ""
	I1229 07:12:33.668288  750501 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:33.684173  750501 out.go:203] 
	W1229 07:12:33.687155  750501 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:33.687184  750501 out.go:285] * 
	* 
	W1229 07:12:33.690643  750501 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:33.693437  750501 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.20s)
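
Every addon-disable failure in this run exits with the same MK_ADDON_DISABLE_PAUSED error shown in the stderr above: the crictl listing of kube-system containers succeeds, but the follow-up "sudo runc list -f json" fails with "open /run/runc: no such file or directory", most likely because this CRI-O node is not running its containers through runc, so the /run/runc state directory is never created. A rough way to reproduce the paused-state check by hand, assuming the addons-707536 profile from this report is still up (both invocations are copied verbatim from the log above):

# succeeds and prints the same kube-system container IDs listed in the stderr log
minikube -p addons-707536 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# expected to fail the same way the addon disable does:
#   level=error msg="open /run/runc: no such file or directory"
minikube -p addons-707536 ssh -- sudo runc list -f json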

                                                
                                    
TestAddons/parallel/CloudSpanner (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-th6pp" [49690671-0daa-489e-a28b-beb3bb1fbc67] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003353125s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (292.454625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:12:30.260504  749969 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:30.265754  749969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:30.265832  749969 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:30.265855  749969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:30.266171  749969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:30.267437  749969 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:30.268029  749969 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:30.268271  749969 addons.go:622] checking whether the cluster is paused
	I1229 07:12:30.268463  749969 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:30.268527  749969 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:30.269366  749969 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:30.300447  749969 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:30.300507  749969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:30.327895  749969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:30.439062  749969 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:30.439163  749969 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:30.472743  749969 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:30.472762  749969 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:30.472767  749969 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:30.472770  749969 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:30.472773  749969 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:30.472777  749969 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:30.472813  749969 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:30.472817  749969 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:30.472821  749969 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:30.472827  749969 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:30.472830  749969 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:30.472834  749969 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:30.472847  749969 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:30.472850  749969 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:30.472853  749969 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:30.472858  749969 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:30.472861  749969 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:30.472865  749969 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:30.472875  749969 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:30.472879  749969 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:30.472883  749969 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:30.472886  749969 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:30.472889  749969 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:30.472892  749969 cri.go:96] found id: ""
	I1229 07:12:30.472940  749969 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:30.487560  749969 out.go:203] 
	W1229 07:12:30.490489  749969 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:30.490517  749969 out.go:285] * 
	* 
	W1229 07:12:30.493777  749969 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:30.496647  749969 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.31s)

                                                
                                    
TestAddons/parallel/LocalPath (9.69s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-707536 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-707536 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [5b9bc7a4-5778-4959-8048-8f6470eeee2d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [5b9bc7a4-5778-4959-8048-8f6470eeee2d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [5b9bc7a4-5778-4959-8048-8f6470eeee2d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.008315661s
addons_test.go:969: (dbg) Run:  kubectl --context addons-707536 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 ssh "cat /opt/local-path-provisioner/pvc-9b09faec-d460-4016-a891-cede84331133_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-707536 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-707536 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (411.222688ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:12:23.841939  749769 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:23.842793  749769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:23.842811  749769 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:23.842818  749769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:23.843109  749769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:23.843443  749769 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:23.843839  749769 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:23.843863  749769 addons.go:622] checking whether the cluster is paused
	I1229 07:12:23.844031  749769 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:23.844049  749769 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:23.844599  749769 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:23.869608  749769 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:23.869680  749769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:23.896108  749769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:24.034616  749769 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:24.034705  749769 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:24.138798  749769 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:24.138818  749769 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:24.138823  749769 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:24.138827  749769 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:24.138830  749769 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:24.138834  749769 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:24.138838  749769 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:24.138841  749769 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:24.138848  749769 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:24.138854  749769 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:24.138857  749769 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:24.138861  749769 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:24.138864  749769 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:24.138866  749769 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:24.138870  749769 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:24.138874  749769 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:24.138877  749769 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:24.138892  749769 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:24.138896  749769 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:24.138899  749769 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:24.138904  749769 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:24.138907  749769 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:24.138910  749769 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:24.138913  749769 cri.go:96] found id: ""
	I1229 07:12:24.138964  749769 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:24.180755  749769 out.go:203] 
	W1229 07:12:24.184338  749769 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:24.184365  749769 out.go:285] * 
	* 
	W1229 07:12:24.187591  749769 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:24.190948  749769 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.69s)
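
The LocalPath flow above is driven by two small manifests applied from testdata/storage-provisioner-rancher, which are not reproduced in this report. The sketch below is only an assumed approximation built from the names visible in the log (PVC test-pvc, pod test-local-path, a busybox container, and the file1 that is later read back from /opt/local-path-provisioner); the storageClassName, request size, and command are labelled guesses, not the actual testdata files:

# assumed approximation of the applied manifests; object names come from the report,
# everything marked "assumed" is a guess
kubectl --context addons-707536 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path      # assumed: class served by the rancher local-path provisioner
  resources:
    requests:
      storage: 64Mi                 # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                  # image tag assumed
    command: ["sh", "-c", "echo local-path-test > /data/file1"]   # assumed payload
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF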

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-pjdg8" [8049ef46-c6ec-4e47-9c0b-7e287d7c8874] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003475915s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (261.820583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:12:14.295069  749243 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:12:14.295911  749243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:14.295949  749243 out.go:374] Setting ErrFile to fd 2...
	I1229 07:12:14.295968  749243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:12:14.296280  749243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:12:14.296672  749243 mustload.go:66] Loading cluster: addons-707536
	I1229 07:12:14.297251  749243 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:14.297304  749243 addons.go:622] checking whether the cluster is paused
	I1229 07:12:14.297482  749243 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:12:14.297523  749243 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:12:14.298105  749243 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:12:14.318367  749243 ssh_runner.go:195] Run: systemctl --version
	I1229 07:12:14.318443  749243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:12:14.336437  749243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:12:14.443616  749243 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:12:14.443720  749243 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:12:14.473390  749243 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:12:14.473413  749243 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:12:14.473418  749243 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:12:14.473422  749243 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:12:14.473426  749243 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:12:14.473430  749243 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:12:14.473433  749243 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:12:14.473436  749243 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:12:14.473448  749243 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:12:14.473454  749243 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:12:14.473458  749243 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:12:14.473465  749243 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:12:14.473468  749243 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:12:14.473472  749243 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:12:14.473480  749243 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:12:14.473485  749243 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:12:14.473488  749243 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:12:14.473491  749243 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:12:14.473494  749243 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:12:14.473497  749243 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:12:14.473502  749243 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:12:14.473510  749243 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:12:14.473513  749243 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:12:14.473516  749243 cri.go:96] found id: ""
	I1229 07:12:14.473564  749243 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:12:14.493425  749243 out.go:203] 
	W1229 07:12:14.496652  749243 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:12:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:12:14.496679  749243 out.go:285] * 
	* 
	W1229 07:12:14.500010  749243 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:12:14.502941  749243 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)
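
This failure, and the identical TestAddons/parallel/Yakd failure below, comes from minikube's paused-cluster check rather than from the addon itself: before disabling an addon, the CLI checks whether the cluster is paused (addons.go:622, cri.go) and shells out to "sudo runc list -f json" on the node. On this cri-o node /run/runc does not exist, so that check exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the check by hand, assuming the addons-707536 profile from this run is still up (these are the same commands the log above shows minikube running):

	# shell into the node of the failing profile
	minikube -p addons-707536 ssh
	# the paused check that fails: /run/runc is absent under cri-o
	sudo runc list -f json
	ls /run/runc
	# listing kube-system containers through the CRI succeeds, as in the log above
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
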
TestAddons/parallel/Yakd (6.27s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-t9fnr" [d396cfb3-5146-4b67-b54a-5a105e8323b6] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009286574s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-707536 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-707536 addons disable yakd --alsologtostderr -v=1: exit status 11 (260.68124ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1229 07:11:58.110303  748895 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:11:58.111155  748895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:58.111205  748895 out.go:374] Setting ErrFile to fd 2...
	I1229 07:11:58.111230  748895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:11:58.111669  748895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:11:58.112093  748895 mustload.go:66] Loading cluster: addons-707536
	I1229 07:11:58.112875  748895 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:11:58.112930  748895 addons.go:622] checking whether the cluster is paused
	I1229 07:11:58.113370  748895 config.go:182] Loaded profile config "addons-707536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:11:58.113422  748895 host.go:66] Checking if "addons-707536" exists ...
	I1229 07:11:58.114003  748895 cli_runner.go:164] Run: docker container inspect addons-707536 --format={{.State.Status}}
	I1229 07:11:58.131415  748895 ssh_runner.go:195] Run: systemctl --version
	I1229 07:11:58.131471  748895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-707536
	I1229 07:11:58.150171  748895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/addons-707536/id_rsa Username:docker}
	I1229 07:11:58.255321  748895 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:11:58.255454  748895 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:11:58.293933  748895 cri.go:96] found id: "a8c420d116959cf9c30315e42be85e8ae0ab442a4109ff6f39fe30d15f28cd45"
	I1229 07:11:58.293960  748895 cri.go:96] found id: "e02d944b3973760ba2fd35313e466160766113d49a15de67f37f3e3926345514"
	I1229 07:11:58.293966  748895 cri.go:96] found id: "9c92f38ed26e291da3f9383b3d3d1db8fb54917bc10d98c0eb16a629a6f9da5f"
	I1229 07:11:58.293970  748895 cri.go:96] found id: "6cac5ee1eec0e1ad76f0216bb638eb76184ad7532f79e629db3a1d0001a02055"
	I1229 07:11:58.293974  748895 cri.go:96] found id: "1514517a2064e0a1b00309758b7688b5fc7a9efbe33168eb78ae248cef816f73"
	I1229 07:11:58.293978  748895 cri.go:96] found id: "ec49538f7adeadb7143ea1a8a3ab270c1c5af62f260dc8b07057a2e89a486bf2"
	I1229 07:11:58.293981  748895 cri.go:96] found id: "f1200919fca5b16e9d4b36872092cea971a390a2b7686c67d36fe9a154c6f945"
	I1229 07:11:58.293984  748895 cri.go:96] found id: "4e0aaeef1de817fba2434995308c7f1944d05f8cf59918cbc4a8375e2e48f2b9"
	I1229 07:11:58.293988  748895 cri.go:96] found id: "71280a6e3cebf6c615cb2bb63eef7e79c9101baa184a51ee6f32d6ef407bef0c"
	I1229 07:11:58.293995  748895 cri.go:96] found id: "43c7e5c70fe3b354a3580e8c15d9ecadae5abec01536290529de9f610e4174d3"
	I1229 07:11:58.294000  748895 cri.go:96] found id: "23b4a4280375c2787a98a49af43a06171c5e02bb6dc03fa50cec06f14a9376f7"
	I1229 07:11:58.294003  748895 cri.go:96] found id: "0f6a0e2e497da0656f07bdf5ce522ce70d922822a6b615f1cdea91ac59134266"
	I1229 07:11:58.294007  748895 cri.go:96] found id: "7e3ee321349e1b77e9da4027e25e1c8f6741f0e10c7fe93f66d3d717184f44d7"
	I1229 07:11:58.294010  748895 cri.go:96] found id: "14bad4c028d247f9998910db129bbd0b7a44377ecdfb37383f4bab191de1dc3b"
	I1229 07:11:58.294014  748895 cri.go:96] found id: "fc8b218ce7e0e86cce5af9e53f05d608131bde809b7d34be2bb85c8523fe0bbc"
	I1229 07:11:58.294019  748895 cri.go:96] found id: "d1202fc68a4bede1f5b561bf609adbfaf763f616f53ebf335c1ce25fcf8710a4"
	I1229 07:11:58.294022  748895 cri.go:96] found id: "09b42f25358fc5c40ededfe6d7112ad8ea329e64a371b9b9e8e5b39ea824e8f0"
	I1229 07:11:58.294031  748895 cri.go:96] found id: "7f5e40dfe3cba40e82ea5342f39794e41f40c1e3f1e5c0ef669049ab381fbdc1"
	I1229 07:11:58.294039  748895 cri.go:96] found id: "0964d931910c1ce463926d2a2c95d49c2acfe2f97a7497d76eb297975ad67854"
	I1229 07:11:58.294042  748895 cri.go:96] found id: "4d0dcebd8d5564f2478aad0264ad530eef07696a1e9879200a3b32d2f4e610b1"
	I1229 07:11:58.294047  748895 cri.go:96] found id: "e3dc9466eaccd8f3821b2b6d4d2285ba60a85e383f74a6fe203c718a8a591d3f"
	I1229 07:11:58.294050  748895 cri.go:96] found id: "8e743d1a3e50ef6a18e396ac2b4e13bfb29746a5b4e1566e9949752489279c37"
	I1229 07:11:58.294053  748895 cri.go:96] found id: "f1b29c84acbf493cf9582d44d4030a08196b5b75aa103b15c1e6c3327ccc2578"
	I1229 07:11:58.294061  748895 cri.go:96] found id: ""
	I1229 07:11:58.294111  748895 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:11:58.310407  748895 out.go:203] 
	W1229 07:11:58.313311  748895 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:11:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:11:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:11:58.313349  748895 out.go:285] * 
	* 
	W1229 07:11:58.316573  748895 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:11:58.319611  748895 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-707536 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
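
The yakd-dashboard pod itself became healthy in about 6 seconds; as with NvidiaDevicePlugin above, only the disable step fails, for the same runc paused-check reason. A sketch for re-running just this subtest locally, assuming a minikube source checkout where the test lives in test/integration/addons_test.go (the package path, timeout, and the driver/runtime/binary test arguments this job passes are assumptions and are omitted here):

	# run only the failing subtest; the parent TestAddons setup still executes
	cd test/integration
	go test -v -timeout 30m -run 'TestAddons/parallel/Yakd' .
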
TestForceSystemdFlag (504.78s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1229 07:49:46.343928  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m19.990797227s)
-- stdout --
	* [force-systemd-flag-122604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-122604" primary control-plane node in "force-systemd-flag-122604" cluster
	* Pulling base image v0.0.48-1766979815-22353 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	
-- /stdout --
** stderr ** 
	I1229 07:49:35.249268  913056 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:49:35.249374  913056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:49:35.249415  913056 out.go:374] Setting ErrFile to fd 2...
	I1229 07:49:35.249422  913056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:49:35.249683  913056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:49:35.250107  913056 out.go:368] Setting JSON to false
	I1229 07:49:35.250963  913056 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16327,"bootTime":1766978248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:49:35.251033  913056 start.go:143] virtualization:  
	I1229 07:49:35.255392  913056 out.go:179] * [force-systemd-flag-122604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:49:35.260063  913056 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:49:35.260132  913056 notify.go:221] Checking for updates...
	I1229 07:49:35.263828  913056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:49:35.267214  913056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:49:35.270527  913056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:49:35.273814  913056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:49:35.276989  913056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:49:35.280654  913056 config.go:182] Loaded profile config "force-systemd-env-002562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:49:35.280796  913056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:49:35.313105  913056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:49:35.313219  913056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:49:35.376584  913056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:49:35.367132861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:49:35.376694  913056 docker.go:319] overlay module found
	I1229 07:49:35.382052  913056 out.go:179] * Using the docker driver based on user configuration
	I1229 07:49:35.384975  913056 start.go:309] selected driver: docker
	I1229 07:49:35.384996  913056 start.go:928] validating driver "docker" against <nil>
	I1229 07:49:35.385009  913056 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:49:35.385763  913056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:49:35.441289  913056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:49:35.431732978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:49:35.441443  913056 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:49:35.441658  913056 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:49:35.444852  913056 out.go:179] * Using Docker driver with root privileges
	I1229 07:49:35.447908  913056 cni.go:84] Creating CNI manager for ""
	I1229 07:49:35.447976  913056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:49:35.447989  913056 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:49:35.448076  913056 start.go:353] cluster config:
	{Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:49:35.451515  913056 out.go:179] * Starting "force-systemd-flag-122604" primary control-plane node in "force-systemd-flag-122604" cluster
	I1229 07:49:35.454354  913056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:49:35.457378  913056 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:49:35.460206  913056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:49:35.460266  913056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:49:35.460284  913056 cache.go:65] Caching tarball of preloaded images
	I1229 07:49:35.460290  913056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:49:35.460376  913056 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:49:35.460387  913056 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:49:35.460495  913056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/config.json ...
	I1229 07:49:35.460512  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/config.json: {Name:mk1bf0ef3d904e9a3fc3e557b78cfdfa2369b401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:35.479890  913056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:49:35.479915  913056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:49:35.479933  913056 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:49:35.479965  913056 start.go:360] acquireMachinesLock for force-systemd-flag-122604: {Name:mkce59187063294ba30aa11e10ce77e64adfbebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:49:35.480070  913056 start.go:364] duration metric: took 83.291µs to acquireMachinesLock for "force-systemd-flag-122604"
	I1229 07:49:35.480101  913056 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:49:35.480180  913056 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:49:35.485356  913056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:49:35.485611  913056 start.go:159] libmachine.API.Create for "force-systemd-flag-122604" (driver="docker")
	I1229 07:49:35.485654  913056 client.go:173] LocalClient.Create starting
	I1229 07:49:35.485728  913056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:49:35.485772  913056 main.go:144] libmachine: Decoding PEM data...
	I1229 07:49:35.485790  913056 main.go:144] libmachine: Parsing certificate...
	I1229 07:49:35.485846  913056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:49:35.485869  913056 main.go:144] libmachine: Decoding PEM data...
	I1229 07:49:35.485884  913056 main.go:144] libmachine: Parsing certificate...
	I1229 07:49:35.486262  913056 cli_runner.go:164] Run: docker network inspect force-systemd-flag-122604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:49:35.501878  913056 cli_runner.go:211] docker network inspect force-systemd-flag-122604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:49:35.501973  913056 network_create.go:284] running [docker network inspect force-systemd-flag-122604] to gather additional debugging logs...
	I1229 07:49:35.501996  913056 cli_runner.go:164] Run: docker network inspect force-systemd-flag-122604
	W1229 07:49:35.518029  913056 cli_runner.go:211] docker network inspect force-systemd-flag-122604 returned with exit code 1
	I1229 07:49:35.518074  913056 network_create.go:287] error running [docker network inspect force-systemd-flag-122604]: docker network inspect force-systemd-flag-122604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-122604 not found
	I1229 07:49:35.518111  913056 network_create.go:289] output of [docker network inspect force-systemd-flag-122604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-122604 not found
	
	** /stderr **
	I1229 07:49:35.518212  913056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:49:35.540507  913056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:49:35.540966  913056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:49:35.541445  913056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:49:35.541992  913056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fefd0}
	I1229 07:49:35.542034  913056 network_create.go:124] attempt to create docker network force-systemd-flag-122604 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:49:35.542113  913056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-122604 force-systemd-flag-122604
	I1229 07:49:35.607706  913056 network_create.go:108] docker network force-systemd-flag-122604 192.168.76.0/24 created
	I1229 07:49:35.607742  913056 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-122604" container
	I1229 07:49:35.607816  913056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:49:35.624216  913056 cli_runner.go:164] Run: docker volume create force-systemd-flag-122604 --label name.minikube.sigs.k8s.io=force-systemd-flag-122604 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:49:35.643301  913056 oci.go:103] Successfully created a docker volume force-systemd-flag-122604
	I1229 07:49:35.643389  913056 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-122604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-122604 --entrypoint /usr/bin/test -v force-systemd-flag-122604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:49:36.199718  913056 oci.go:107] Successfully prepared a docker volume force-systemd-flag-122604
	I1229 07:49:36.199781  913056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:49:36.199791  913056 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:49:36.199875  913056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-122604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:49:40.097554  913056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-122604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.897621835s)
	I1229 07:49:40.097592  913056 kic.go:203] duration metric: took 3.897797409s to extract preloaded images to volume ...
	W1229 07:49:40.097741  913056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:49:40.097863  913056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:49:40.150163  913056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-122604 --name force-systemd-flag-122604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-122604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-122604 --network force-systemd-flag-122604 --ip 192.168.76.2 --volume force-systemd-flag-122604:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:49:40.465542  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Running}}
	I1229 07:49:40.489118  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Status}}
	I1229 07:49:40.519877  913056 cli_runner.go:164] Run: docker exec force-systemd-flag-122604 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:49:40.581583  913056 oci.go:144] the created container "force-systemd-flag-122604" has a running status.
	I1229 07:49:40.581611  913056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa...
	I1229 07:49:40.787556  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:49:40.787654  913056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:49:40.815266  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Status}}
	I1229 07:49:40.842840  913056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:49:40.842867  913056 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-122604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:49:40.909683  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Status}}
	I1229 07:49:40.940519  913056 machine.go:94] provisionDockerMachine start ...
	I1229 07:49:40.940720  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:40.971534  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:40.971879  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:40.971888  913056 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:49:40.972474  913056 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59008->127.0.0.1:33803: read: connection reset by peer
	I1229 07:49:44.128475  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-122604
	
	I1229 07:49:44.128498  913056 ubuntu.go:182] provisioning hostname "force-systemd-flag-122604"
	I1229 07:49:44.128576  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.145938  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:44.146256  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:44.146298  913056 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-122604 && echo "force-systemd-flag-122604" | sudo tee /etc/hostname
	I1229 07:49:44.314558  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-122604
	
	I1229 07:49:44.314638  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.333419  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:44.333768  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:44.333793  913056 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-122604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-122604/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-122604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:49:44.485315  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:49:44.485355  913056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:49:44.485377  913056 ubuntu.go:190] setting up certificates
	I1229 07:49:44.485392  913056 provision.go:84] configureAuth start
	I1229 07:49:44.485474  913056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-122604
	I1229 07:49:44.506460  913056 provision.go:143] copyHostCerts
	I1229 07:49:44.506524  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:49:44.506575  913056 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:49:44.506586  913056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:49:44.506728  913056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:49:44.506861  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:49:44.506886  913056 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:49:44.506894  913056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:49:44.506924  913056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:49:44.507003  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:49:44.507059  913056 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:49:44.507068  913056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:49:44.507106  913056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:49:44.507195  913056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-122604 san=[127.0.0.1 192.168.76.2 force-systemd-flag-122604 localhost minikube]
	I1229 07:49:44.771898  913056 provision.go:177] copyRemoteCerts
	I1229 07:49:44.771968  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:49:44.772035  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.790622  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:44.897529  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:49:44.897674  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:49:44.915578  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:49:44.915649  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:49:44.933629  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:49:44.933718  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:49:44.955290  913056 provision.go:87] duration metric: took 469.882491ms to configureAuth
	I1229 07:49:44.955315  913056 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:49:44.955514  913056 config.go:182] Loaded profile config "force-systemd-flag-122604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:49:44.955627  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.975562  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:44.975870  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:44.975885  913056 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:49:45.363967  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:49:45.363992  913056 machine.go:97] duration metric: took 4.423451846s to provisionDockerMachine
	I1229 07:49:45.364004  913056 client.go:176] duration metric: took 9.87833796s to LocalClient.Create
	I1229 07:49:45.364019  913056 start.go:167] duration metric: took 9.878409371s to libmachine.API.Create "force-systemd-flag-122604"
	I1229 07:49:45.364026  913056 start.go:293] postStartSetup for "force-systemd-flag-122604" (driver="docker")
	I1229 07:49:45.364052  913056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:49:45.364120  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:49:45.364171  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.382554  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.489966  913056 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:49:45.493442  913056 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:49:45.493526  913056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:49:45.493545  913056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:49:45.493604  913056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:49:45.493693  913056 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:49:45.493705  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> /etc/ssl/certs/7418542.pem
	I1229 07:49:45.493816  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:49:45.501496  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:49:45.518943  913056 start.go:296] duration metric: took 154.900452ms for postStartSetup
	I1229 07:49:45.519317  913056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-122604
	I1229 07:49:45.536352  913056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/config.json ...
	I1229 07:49:45.536636  913056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:49:45.536677  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.553541  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.658023  913056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:49:45.663073  913056 start.go:128] duration metric: took 10.182874668s to createHost
	I1229 07:49:45.663102  913056 start.go:83] releasing machines lock for "force-systemd-flag-122604", held for 10.183016109s
	I1229 07:49:45.663233  913056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-122604
	I1229 07:49:45.680754  913056 ssh_runner.go:195] Run: cat /version.json
	I1229 07:49:45.680838  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.680893  913056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:49:45.680960  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.706663  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.714684  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.905273  913056 ssh_runner.go:195] Run: systemctl --version
	I1229 07:49:45.911874  913056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:49:45.949399  913056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:49:45.953891  913056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:49:45.953973  913056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:49:45.982134  913056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:49:45.982158  913056 start.go:496] detecting cgroup driver to use...
	I1229 07:49:45.982172  913056 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:49:45.982252  913056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:49:46.001208  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:49:46.016717  913056 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:49:46.016840  913056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:49:46.035267  913056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:49:46.054195  913056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:49:46.176312  913056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:49:46.301685  913056 docker.go:234] disabling docker service ...
	I1229 07:49:46.301756  913056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:49:46.323995  913056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:49:46.337586  913056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:49:46.475207  913056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:49:46.604163  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:49:46.618065  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:49:46.632608  913056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:49:46.632764  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.642122  913056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:49:46.642195  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.651077  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.659900  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.669139  913056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:49:46.677350  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.686430  913056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.700738  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.709792  913056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:49:46.717095  913056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:49:46.724633  913056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:49:46.837291  913056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:49:47.013749  913056 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:49:47.013859  913056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:49:47.017928  913056 start.go:574] Will wait 60s for crictl version
	I1229 07:49:47.018025  913056 ssh_runner.go:195] Run: which crictl
	I1229 07:49:47.021782  913056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:49:47.051254  913056 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:49:47.051342  913056 ssh_runner.go:195] Run: crio --version
	I1229 07:49:47.080006  913056 ssh_runner.go:195] Run: crio --version
	I1229 07:49:47.113981  913056 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:49:47.116929  913056 cli_runner.go:164] Run: docker network inspect force-systemd-flag-122604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:49:47.134029  913056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:49:47.137842  913056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:49:47.148094  913056 kubeadm.go:884] updating cluster {Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:49:47.148233  913056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:49:47.148298  913056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:49:47.189088  913056 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:49:47.189116  913056 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:49:47.189175  913056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:49:47.227010  913056 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:49:47.227034  913056 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:49:47.227042  913056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1229 07:49:47.227131  913056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-122604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:49:47.227213  913056 ssh_runner.go:195] Run: crio config
	I1229 07:49:47.313552  913056 cni.go:84] Creating CNI manager for ""
	I1229 07:49:47.313579  913056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:49:47.313599  913056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:49:47.313623  913056 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-122604 NodeName:force-systemd-flag-122604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:49:47.313746  913056 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-122604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:49:47.313818  913056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:49:47.322877  913056 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:49:47.322959  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:49:47.331120  913056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1229 07:49:47.344904  913056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:49:47.358841  913056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1229 07:49:47.372273  913056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:49:47.376310  913056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:49:47.386738  913056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:49:47.500574  913056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:49:47.517197  913056 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604 for IP: 192.168.76.2
	I1229 07:49:47.517261  913056 certs.go:195] generating shared ca certs ...
	I1229 07:49:47.517293  913056 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:47.517486  913056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:49:47.517559  913056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:49:47.517595  913056 certs.go:257] generating profile certs ...
	I1229 07:49:47.517671  913056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.key
	I1229 07:49:47.517707  913056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.crt with IP's: []
	I1229 07:49:47.816408  913056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.crt ...
	I1229 07:49:47.816441  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.crt: {Name:mk33584c9debe594af3bebb9cb8258d1031bb064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:47.816639  913056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.key ...
	I1229 07:49:47.816653  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.key: {Name:mk8e0a0e3fdd19f8261ca7bb26bd741f3e0ca42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:47.816751  913056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82
	I1229 07:49:47.816770  913056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:49:48.089355  913056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82 ...
	I1229 07:49:48.089390  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82: {Name:mk08aa586a105be4dae2eb603280608008f5a395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.089585  913056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82 ...
	I1229 07:49:48.089601  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82: {Name:mk36e94961f60c3ee70d884be8575a3a8ec332f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.089684  913056 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt
	I1229 07:49:48.089773  913056 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key
	I1229 07:49:48.089835  913056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key
	I1229 07:49:48.089854  913056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt with IP's: []
	I1229 07:49:48.293637  913056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt ...
	I1229 07:49:48.293669  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt: {Name:mk5683a320bd98514c9c3e4cbb9c75305f72f097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.293851  913056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key ...
	I1229 07:49:48.293865  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key: {Name:mk1c68b6163de22fce3f17027fca250f69290ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.293956  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:49:48.293978  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:49:48.293993  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:49:48.294009  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:49:48.294020  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:49:48.294036  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:49:48.294053  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:49:48.294063  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:49:48.294119  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:49:48.294161  913056 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:49:48.294176  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:49:48.294201  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:49:48.294230  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:49:48.294261  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:49:48.294312  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:49:48.294346  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.294361  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem -> /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.294372  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.294886  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:49:48.314800  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:49:48.332474  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:49:48.351130  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:49:48.369256  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:49:48.387101  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:49:48.404617  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:49:48.421876  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:49:48.439705  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:49:48.457532  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:49:48.474996  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:49:48.492037  913056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:49:48.504947  913056 ssh_runner.go:195] Run: openssl version
	I1229 07:49:48.511278  913056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.519120  913056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:49:48.526746  913056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.530522  913056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.530594  913056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.571311  913056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:49:48.578901  913056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:49:48.586388  913056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.593903  913056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:49:48.601189  913056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.604848  913056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.604915  913056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.645906  913056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:49:48.653539  913056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:49:48.661174  913056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.668565  913056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:49:48.676484  913056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.680530  913056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.680603  913056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.722648  913056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:49:48.730502  913056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:49:48.738361  913056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:49:48.742990  913056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:49:48.743095  913056 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:49:48.743208  913056 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:49:48.743297  913056 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:49:48.777301  913056 cri.go:96] found id: ""
	I1229 07:49:48.777422  913056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:49:48.785633  913056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:49:48.793430  913056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:49:48.793529  913056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:49:48.801370  913056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:49:48.801391  913056 kubeadm.go:158] found existing configuration files:
	
	I1229 07:49:48.801452  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:49:48.808885  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:49:48.808995  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:49:48.816114  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:49:48.823981  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:49:48.824054  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:49:48.831607  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:49:48.839611  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:49:48.839682  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:49:48.847644  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:49:48.855825  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:49:48.855901  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:49:48.863605  913056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:49:48.902235  913056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:49:48.902300  913056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:49:49.010225  913056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:49:49.010308  913056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:49:49.010350  913056 kubeadm.go:319] OS: Linux
	I1229 07:49:49.010406  913056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:49:49.010460  913056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:49:49.010511  913056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:49:49.010563  913056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:49:49.010614  913056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:49:49.010666  913056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:49:49.010714  913056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:49:49.010765  913056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:49:49.010814  913056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:49:49.084691  913056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:49:49.084909  913056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:49:49.085048  913056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:49:49.093287  913056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:49:49.099851  913056 out.go:252]   - Generating certificates and keys ...
	I1229 07:49:49.099943  913056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:49:49.100020  913056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:49:49.290759  913056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:49:49.601968  913056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:49:49.748913  913056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:49:50.030390  913056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:49:50.398140  913056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:49:50.398583  913056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:49:50.566597  913056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:49:50.567117  913056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:49:50.731085  913056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:49:51.038703  913056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:49:51.307597  913056 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:49:51.307961  913056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:49:51.600251  913056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:49:51.661463  913056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:49:51.780528  913056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:49:51.918077  913056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:49:52.030154  913056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:49:52.030922  913056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:49:52.033728  913056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:49:52.037272  913056 out.go:252]   - Booting up control plane ...
	I1229 07:49:52.037376  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:49:52.037450  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:49:52.039203  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:49:52.055715  913056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:49:52.055963  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:49:52.064098  913056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:49:52.064606  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:49:52.064715  913056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:49:52.200744  913056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:49:52.200998  913056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:53:52.201509  913056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000842749s
	I1229 07:53:52.201542  913056 kubeadm.go:319] 
	I1229 07:53:52.201600  913056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:53:52.201638  913056 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:53:52.201746  913056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:53:52.201755  913056 kubeadm.go:319] 
	I1229 07:53:52.201859  913056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:53:52.201895  913056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:53:52.201930  913056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:53:52.201938  913056 kubeadm.go:319] 
	I1229 07:53:52.208144  913056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:53:52.208619  913056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:53:52.208743  913056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:53:52.209029  913056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:53:52.209041  913056 kubeadm.go:319] 
	I1229 07:53:52.209109  913056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:53:52.209252  913056 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000842749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000842749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:53:52.209331  913056 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1229 07:53:52.622384  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:53:52.635442  913056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:53:52.635506  913056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:53:52.644553  913056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:53:52.644574  913056 kubeadm.go:158] found existing configuration files:
	
	I1229 07:53:52.644655  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:53:52.652636  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:53:52.652702  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:53:52.660083  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:53:52.668000  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:53:52.668064  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:53:52.675501  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:53:52.683499  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:53:52.683585  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:53:52.691378  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:53:52.699077  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:53:52.699141  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:53:52.706928  913056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:53:52.743355  913056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:53:52.743680  913056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:53:52.820188  913056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:53:52.820265  913056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:53:52.820300  913056 kubeadm.go:319] OS: Linux
	I1229 07:53:52.820355  913056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:53:52.820409  913056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:53:52.820460  913056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:53:52.820512  913056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:53:52.820563  913056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:53:52.820616  913056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:53:52.820665  913056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:53:52.820716  913056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:53:52.820764  913056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:53:52.898967  913056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:53:52.899087  913056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:53:52.899183  913056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:53:52.909202  913056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:53:52.915184  913056 out.go:252]   - Generating certificates and keys ...
	I1229 07:53:52.915280  913056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:53:52.915366  913056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:53:52.915444  913056 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:53:52.915505  913056 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:53:52.915575  913056 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:53:52.915628  913056 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:53:52.915691  913056 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:53:52.915753  913056 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:53:52.915833  913056 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:53:52.915906  913056 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:53:52.915943  913056 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:53:52.916000  913056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:53:53.163034  913056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:53:53.260276  913056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:53:53.509146  913056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:53:53.923256  913056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:53:54.338003  913056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:53:54.338712  913056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:53:54.341424  913056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:53:54.344645  913056 out.go:252]   - Booting up control plane ...
	I1229 07:53:54.344752  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:53:54.344850  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:53:54.344919  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:53:54.362719  913056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:53:54.362842  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:53:54.370734  913056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:53:54.371185  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:53:54.371512  913056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:53:54.549211  913056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:53:54.549333  913056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:57:54.550336  913056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001438898s
	I1229 07:57:54.550655  913056 kubeadm.go:319] 
	I1229 07:57:54.550727  913056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:57:54.550768  913056 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:57:54.550871  913056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:57:54.550879  913056 kubeadm.go:319] 
	I1229 07:57:54.550977  913056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:57:54.551010  913056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:57:54.551042  913056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:57:54.551049  913056 kubeadm.go:319] 
	I1229 07:57:54.555412  913056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:57:54.555837  913056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:57:54.555958  913056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:57:54.556227  913056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:57:54.556237  913056 kubeadm.go:319] 
	I1229 07:57:54.556305  913056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:57:54.556358  913056 kubeadm.go:403] duration metric: took 8m5.813268978s to StartCluster
	I1229 07:57:54.556394  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:57:54.556458  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:57:54.612243  913056 cri.go:96] found id: ""
	I1229 07:57:54.612274  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.612283  913056 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:57:54.612290  913056 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:57:54.612348  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:57:54.645699  913056 cri.go:96] found id: ""
	I1229 07:57:54.645720  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.645728  913056 logs.go:284] No container was found matching "etcd"
	I1229 07:57:54.645734  913056 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:57:54.645796  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:57:54.683214  913056 cri.go:96] found id: ""
	I1229 07:57:54.683237  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.683246  913056 logs.go:284] No container was found matching "coredns"
	I1229 07:57:54.683252  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:57:54.683309  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:57:54.727409  913056 cri.go:96] found id: ""
	I1229 07:57:54.727431  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.727440  913056 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:57:54.727447  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:57:54.727503  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:57:54.757950  913056 cri.go:96] found id: ""
	I1229 07:57:54.757971  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.757980  913056 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:57:54.757986  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:57:54.758044  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:57:54.785787  913056 cri.go:96] found id: ""
	I1229 07:57:54.785809  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.785818  913056 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:57:54.785824  913056 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:57:54.785884  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:57:54.814203  913056 cri.go:96] found id: ""
	I1229 07:57:54.814224  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.814232  913056 logs.go:284] No container was found matching "kindnet"
	I1229 07:57:54.814242  913056 logs.go:123] Gathering logs for dmesg ...
	I1229 07:57:54.814254  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:57:54.830942  913056 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:57:54.831101  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:57:54.978517  913056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:57:54.970778    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.971495    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.972582    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.973250    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.974772    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:57:54.970778    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.971495    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.972582    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.973250    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.974772    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:57:54.978591  913056 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:57:54.978628  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:57:55.014450  913056 logs.go:123] Gathering logs for container status ...
	I1229 07:57:55.014494  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:57:55.060024  913056 logs.go:123] Gathering logs for kubelet ...
	I1229 07:57:55.060052  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1229 07:57:55.154919  913056 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001438898s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:57:55.155036  913056 out.go:285] * 
	W1229 07:57:55.155269  913056 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001438898s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:57:55.155336  913056 out.go:285] * 
	W1229 07:57:55.155627  913056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:57:55.160728  913056 out.go:203] 
	W1229 07:57:55.164773  913056 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001438898s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:57:55.165085  913056 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:57:55.165155  913056 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:57:55.169973  913056 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
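The start failed because the kubelet health endpoint (http://127.0.0.1:10248/healthz) never came up within 4m0s, and the log's own Suggestion line points at the kubelet cgroup driver. A minimal follow-up sketch, assuming the same profile name and driver flags as this run:

	# inspect the kubelet inside the node, per the kubeadm hint above
	out/minikube-linux-arm64 -p force-systemd-flag-122604 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50"
	# retry the start with the kubelet cgroup driver pinned to systemd, as the Suggestion line proposes
	out/minikube-linux-arm64 start -p force-systemd-flag-122604 --memory=3072 --force-systemd \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=5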
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-122604 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
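That ssh step reads the CRI-O drop-in written for --force-systemd; its contents are not captured in this report, but the cgroup manager it configures can be checked directly. The expected value below is an assumption for a systemd-managed node, not output from this run:

	out/minikube-linux-arm64 -p force-systemd-flag-122604 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected (assumption): cgroup_manager = "systemd"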
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-29 07:57:55.692003399 +0000 UTC m=+2916.998960996
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-122604
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-122604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6",
	        "Created": "2025-12-29T07:49:40.165150159Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 913488,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:49:40.230933377Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6/hosts",
	        "LogPath": "/var/lib/docker/containers/ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6/ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6-json.log",
	        "Name": "/force-systemd-flag-122604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-122604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-122604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac0ede36bcaf15cba1cff755a91e01c7a2711237262ab91ce4d2786e16774ba6",
	                "LowerDir": "/var/lib/docker/overlay2/27c5f264069399d0bfc91a32ac95cb79b57dfbb76530248bacab7c3906026a46-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/27c5f264069399d0bfc91a32ac95cb79b57dfbb76530248bacab7c3906026a46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/27c5f264069399d0bfc91a32ac95cb79b57dfbb76530248bacab7c3906026a46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/27c5f264069399d0bfc91a32ac95cb79b57dfbb76530248bacab7c3906026a46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-122604",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-122604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-122604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-122604",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-122604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e39167e5b071b7aeea3417c61d54dbd373dc9abb350a4d0bd225094c00dd6199",
	            "SandboxKey": "/var/run/docker/netns/e39167e5b071",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-122604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:40:61:b6:50:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "714d1dc9bdce84263cc3960c85a151c66268d99b415ccad4642efa51749b9de0",
	                    "EndpointID": "120a9291d8bf01b25f2f13fb5681d43c039add960a0a323cf93c788a867b9c97",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-122604",
	                        "ac0ede36bcaf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
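When only a couple of fields from this dump matter, docker's Go-template --format flag avoids reading the full JSON; a sketch against the same container:

	# container state, and the host port published for 22/tcp (33803 in the inspect output above)
	docker container inspect -f '{{.State.Status}}' force-systemd-flag-122604
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-122604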
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-122604 -n force-systemd-flag-122604
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-122604 -n force-systemd-flag-122604: exit status 6 (423.547279ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:57:56.126933  942199 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-122604" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
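The exit status 6 comes from the kubeconfig check: "force-systemd-flag-122604" does not appear in the kubeconfig, and the status output itself suggests `minikube update-context`. A sketch of that fix, assuming the same kubeconfig path:

	# list the contexts the kubeconfig actually holds
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22353-739997/kubeconfig
	# rewrite the context for this profile, as the status warning suggests
	out/minikube-linux-arm64 -p force-systemd-flag-122604 update-context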
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-122604 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524        │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524        │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524        │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524        │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524        │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
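Note: the command table above and the "Last Start" section that follows appear to be part of a minikube log dump collected for the embed-certs-664524 profile. A sketch of gathering the same bundle by hand, assuming the profile name from this run:

	minikube -p embed-certs-664524 logs --file=embed-certs-664524.log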
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:57:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
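Note: per the format line above, an entry such as "I1229 07:57:45.440077  940463 out.go:360] ..." decodes as severity I (info), month/day 12/29, time 07:57:45.440077, thread id 940463, and source location out.go line 360, followed by the message.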
	I1229 07:57:45.440077  940463 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:57:45.440507  940463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:57:45.440542  940463 out.go:374] Setting ErrFile to fd 2...
	I1229 07:57:45.440562  940463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:57:45.441124  940463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:57:45.441687  940463 out.go:368] Setting JSON to false
	I1229 07:57:45.442713  940463 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16817,"bootTime":1766978248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:57:45.442839  940463 start.go:143] virtualization:  
	I1229 07:57:45.446047  940463 out.go:179] * [embed-certs-664524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:57:45.455359  940463 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:57:45.458627  940463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:57:45.459006  940463 notify.go:221] Checking for updates...
	I1229 07:57:45.464702  940463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:57:45.467736  940463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:57:45.470723  940463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:57:45.473775  940463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:57:45.478512  940463 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:57:45.479210  940463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:57:45.516958  940463 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:57:45.517069  940463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:57:45.576165  940463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:57:45.566855419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:57:45.576278  940463 docker.go:319] overlay module found
	I1229 07:57:45.579375  940463 out.go:179] * Using the docker driver based on existing profile
	I1229 07:57:45.582133  940463 start.go:309] selected driver: docker
	I1229 07:57:45.582157  940463 start.go:928] validating driver "docker" against &{Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:57:45.582251  940463 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:57:45.582981  940463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:57:45.646565  940463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:57:45.637304558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:57:45.646888  940463 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:57:45.646927  940463 cni.go:84] Creating CNI manager for ""
	I1229 07:57:45.646994  940463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:57:45.647036  940463 start.go:353] cluster config:
	{Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:57:45.651983  940463 out.go:179] * Starting "embed-certs-664524" primary control-plane node in "embed-certs-664524" cluster
	I1229 07:57:45.654782  940463 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:57:45.657653  940463 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:57:45.660571  940463 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:57:45.660632  940463 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:57:45.660646  940463 cache.go:65] Caching tarball of preloaded images
	I1229 07:57:45.660655  940463 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:57:45.660734  940463 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:57:45.660744  940463 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:57:45.661422  940463 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/config.json ...
	I1229 07:57:45.678836  940463 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:57:45.678860  940463 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:57:45.678883  940463 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:57:45.678918  940463 start.go:360] acquireMachinesLock for embed-certs-664524: {Name:mk3e68f4ad1434f78982be79fcb8ab8f3f5e9bab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:57:45.678986  940463 start.go:364] duration metric: took 46.023µs to acquireMachinesLock for "embed-certs-664524"
	I1229 07:57:45.679009  940463 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:57:45.679017  940463 fix.go:54] fixHost starting: 
	I1229 07:57:45.679288  940463 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:45.695560  940463 fix.go:112] recreateIfNeeded on embed-certs-664524: state=Stopped err=<nil>
	W1229 07:57:45.695590  940463 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:57:45.698820  940463 out.go:252] * Restarting existing docker container for "embed-certs-664524" ...
	I1229 07:57:45.698904  940463 cli_runner.go:164] Run: docker start embed-certs-664524
	I1229 07:57:45.954227  940463 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:45.978021  940463 kic.go:430] container "embed-certs-664524" state is running.
	I1229 07:57:45.978415  940463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-664524
	I1229 07:57:46.001138  940463 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/config.json ...
	I1229 07:57:46.001966  940463 machine.go:94] provisionDockerMachine start ...
	I1229 07:57:46.002046  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:46.029029  940463 main.go:144] libmachine: Using SSH client type: native
	I1229 07:57:46.029364  940463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33838 <nil> <nil>}
	I1229 07:57:46.029374  940463 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:57:46.031030  940463 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:57:49.184648  940463 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-664524
	
	I1229 07:57:49.184676  940463 ubuntu.go:182] provisioning hostname "embed-certs-664524"
	I1229 07:57:49.184753  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:49.204010  940463 main.go:144] libmachine: Using SSH client type: native
	I1229 07:57:49.204347  940463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33838 <nil> <nil>}
	I1229 07:57:49.204360  940463 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-664524 && echo "embed-certs-664524" | sudo tee /etc/hostname
	I1229 07:57:49.366171  940463 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-664524
	
	I1229 07:57:49.366257  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:49.385033  940463 main.go:144] libmachine: Using SSH client type: native
	I1229 07:57:49.385362  940463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33838 <nil> <nil>}
	I1229 07:57:49.385385  940463 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-664524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-664524/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-664524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:57:49.537013  940463 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:57:49.537041  940463 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:57:49.537102  940463 ubuntu.go:190] setting up certificates
	I1229 07:57:49.537113  940463 provision.go:84] configureAuth start
	I1229 07:57:49.537197  940463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-664524
	I1229 07:57:49.560183  940463 provision.go:143] copyHostCerts
	I1229 07:57:49.560260  940463 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:57:49.560281  940463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:57:49.560358  940463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:57:49.560456  940463 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:57:49.560468  940463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:57:49.560494  940463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:57:49.560551  940463 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:57:49.560560  940463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:57:49.560585  940463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:57:49.560635  940463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.embed-certs-664524 san=[127.0.0.1 192.168.85.2 embed-certs-664524 localhost minikube]
	I1229 07:57:49.694292  940463 provision.go:177] copyRemoteCerts
	I1229 07:57:49.694370  940463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:57:49.694410  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:49.711514  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:49.816920  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:57:49.835179  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1229 07:57:49.853129  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:57:49.871915  940463 provision.go:87] duration metric: took 334.773653ms to configureAuth
	I1229 07:57:49.871987  940463 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:57:49.872218  940463 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:57:49.872328  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:49.889449  940463 main.go:144] libmachine: Using SSH client type: native
	I1229 07:57:49.889782  940463 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33838 <nil> <nil>}
	I1229 07:57:49.889802  940463 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:57:50.285195  940463 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
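Note: the SSH command above writes CRIO_MINIKUBE_OPTIONS (here, --insecure-registry for the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O. A sketch for reading the file back from the node, assuming the profile name used in this run:

	minikube -p embed-certs-664524 ssh -- sudo cat /etc/sysconfig/crio.minikube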
	
	I1229 07:57:50.285221  940463 machine.go:97] duration metric: took 4.283233305s to provisionDockerMachine
	I1229 07:57:50.285233  940463 start.go:293] postStartSetup for "embed-certs-664524" (driver="docker")
	I1229 07:57:50.285259  940463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:57:50.285353  940463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:57:50.285395  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:50.310151  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:50.417349  940463 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:57:50.421111  940463 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:57:50.421144  940463 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:57:50.421156  940463 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:57:50.421214  940463 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:57:50.421319  940463 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:57:50.421442  940463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:57:50.428841  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:57:50.447602  940463 start.go:296] duration metric: took 162.352468ms for postStartSetup
	I1229 07:57:50.447684  940463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:57:50.447738  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:50.470297  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:50.578044  940463 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:57:50.582924  940463 fix.go:56] duration metric: took 4.90389845s for fixHost
	I1229 07:57:50.582952  940463 start.go:83] releasing machines lock for "embed-certs-664524", held for 4.903953073s
	I1229 07:57:50.583020  940463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-664524
	I1229 07:57:50.603815  940463 ssh_runner.go:195] Run: cat /version.json
	I1229 07:57:50.603843  940463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:57:50.603880  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:50.603897  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:50.623550  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:50.625092  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:50.728985  940463 ssh_runner.go:195] Run: systemctl --version
	I1229 07:57:50.820975  940463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:57:50.862108  940463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:57:50.866459  940463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:57:50.866557  940463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:57:50.874790  940463 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:57:50.874871  940463 start.go:496] detecting cgroup driver to use...
	I1229 07:57:50.874930  940463 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:57:50.875004  940463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:57:50.890021  940463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:57:50.903266  940463 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:57:50.903386  940463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:57:50.919288  940463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:57:50.932493  940463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:57:51.056085  940463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:57:51.174409  940463 docker.go:234] disabling docker service ...
	I1229 07:57:51.174475  940463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:57:51.189922  940463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:57:51.203863  940463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:57:51.330671  940463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:57:51.454897  940463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:57:51.469858  940463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:57:51.486003  940463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:57:51.486072  940463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.495387  940463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:57:51.495473  940463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.505235  940463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.515402  940463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.525392  940463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:57:51.538547  940463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.547933  940463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.556735  940463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:57:51.565776  940463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:57:51.573713  940463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:57:51.582170  940463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:57:51.696956  940463 ssh_runner.go:195] Run: sudo systemctl restart crio
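Note: the sed commands above adjust /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup, default_sysctls, ip_unprivileged_port_start) before CRI-O is restarted below. The resulting drop-in can be read back the same way the "ssh ... cat /etc/crio/crio.conf.d/02-crio.conf" entry in the audit table does (a sketch):

	minikube -p embed-certs-664524 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf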
	I1229 07:57:51.888877  940463 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:57:51.889061  940463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:57:51.893464  940463 start.go:574] Will wait 60s for crictl version
	I1229 07:57:51.893532  940463 ssh_runner.go:195] Run: which crictl
	I1229 07:57:51.897179  940463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:57:51.931847  940463 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
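Note: the /etc/crictl.yaml written earlier points crictl at the CRI-O socket, which is why the bare crictl calls in this log (crictl version here, crictl images and ps later) reach CRI-O without extra flags. The per-invocation equivalent is a one-liner (sketch):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a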
	I1229 07:57:51.931944  940463 ssh_runner.go:195] Run: crio --version
	I1229 07:57:51.964865  940463 ssh_runner.go:195] Run: crio --version
	I1229 07:57:52.000565  940463 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:57:52.003827  940463 cli_runner.go:164] Run: docker network inspect embed-certs-664524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:57:52.021973  940463 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:57:52.026282  940463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:57:52.037072  940463 kubeadm.go:884] updating cluster {Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:57:52.037197  940463 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:57:52.037257  940463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:57:52.072512  940463 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:57:52.072540  940463 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:57:52.072599  940463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:57:52.105946  940463 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:57:52.105971  940463 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:57:52.105979  940463 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:57:52.106108  940463 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-664524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
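Note: the [Service] override above, together with this config block, is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hedged way to view the merged kubelet unit on the node afterwards:

	minikube -p embed-certs-664524 ssh -- sudo systemctl cat kubelet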
	I1229 07:57:52.106195  940463 ssh_runner.go:195] Run: crio config
	I1229 07:57:52.165111  940463 cni.go:84] Creating CNI manager for ""
	I1229 07:57:52.165176  940463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:57:52.165204  940463 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:57:52.165229  940463 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-664524 NodeName:embed-certs-664524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:57:52.165422  940463 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-664524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
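Note: the generated manifest above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file and is staged as /var/tmp/minikube/kubeadm.yaml.new below. As a rough sketch (not necessarily the exact command this restart path runs), such a file would be consumed via kubeadm's --config flag:

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml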
	
	I1229 07:57:52.165502  940463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:57:52.173478  940463 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:57:52.173582  940463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:57:52.181342  940463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1229 07:57:52.198275  940463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:57:52.216380  940463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1229 07:57:52.234468  940463 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:57:52.238656  940463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:57:52.262813  940463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:57:52.382149  940463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:57:52.398730  940463 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524 for IP: 192.168.85.2
	I1229 07:57:52.398755  940463 certs.go:195] generating shared ca certs ...
	I1229 07:57:52.398771  940463 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:57:52.398926  940463 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:57:52.398972  940463 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:57:52.398984  940463 certs.go:257] generating profile certs ...
	I1229 07:57:52.399081  940463 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.key
	I1229 07:57:52.399154  940463 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key.b587eace
	I1229 07:57:52.399201  940463 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.key
	I1229 07:57:52.399319  940463 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:57:52.399356  940463 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:57:52.399381  940463 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:57:52.399419  940463 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:57:52.399451  940463 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:57:52.399483  940463 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:57:52.399533  940463 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:57:52.400184  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:57:52.418534  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:57:52.437288  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:57:52.456049  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:57:52.475702  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1229 07:57:52.502453  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:57:52.535727  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:57:52.556341  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:57:52.576206  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:57:52.599341  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:57:52.625647  940463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:57:52.644378  940463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:57:52.664319  940463 ssh_runner.go:195] Run: openssl version
	I1229 07:57:52.670913  940463 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:57:52.679269  940463 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:57:52.687589  940463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:57:52.691581  940463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:57:52.691692  940463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:57:52.734154  940463 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:57:52.742058  940463 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:57:52.749719  940463 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:57:52.757061  940463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:57:52.760952  940463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:57:52.761020  940463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:57:52.802012  940463 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:57:52.809311  940463 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:57:52.816943  940463 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:57:52.824505  940463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:57:52.828409  940463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:57:52.828489  940463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:57:52.874502  940463 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
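Note: the hash-named links verified above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: the link name is the output of "openssl x509 -hash -noout -in <cert>" plus a ".0" suffix, which is how TLS clients locate CAs under /etc/ssl/certs. A manual sketch of the same step for the minikube CA:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0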
	I1229 07:57:52.882071  940463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:57:52.886016  940463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:57:52.927132  940463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:57:52.973300  940463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:57:53.054146  940463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:57:53.107167  940463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:57:53.191370  940463 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:57:53.262006  940463 kubeadm.go:401] StartCluster: {Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:57:53.262191  940463 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:57:53.262327  940463 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:57:53.311649  940463 cri.go:96] found id: "654e52c22fa70936846b530c44f221ec218a752ca1cc7140e9422de0ec6628d3"
	I1229 07:57:53.311720  940463 cri.go:96] found id: "d33e8a566872e68896c0f3f3ed489834fbd96480e02e43b56550ca6df569cf2e"
	I1229 07:57:53.311738  940463 cri.go:96] found id: "f4bd112a2b0e2eb9aca6d9b1d6032fd1d77abfb043c5af20060885edb1a04b53"
	I1229 07:57:53.311762  940463 cri.go:96] found id: "7de23a46010383a4bcead63ecc65f9f99043b4ec3bb3d680e0839a2c53de7654"
	I1229 07:57:53.311803  940463 cri.go:96] found id: ""
	I1229 07:57:53.311895  940463 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:57:53.332504  940463 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:57:53Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:57:53.332625  940463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:57:53.347027  940463 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:57:53.347094  940463 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:57:53.347193  940463 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:57:53.355128  940463 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:57:53.355646  940463 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-664524" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:57:53.355832  940463 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-664524" cluster setting kubeconfig missing "embed-certs-664524" context setting]
	I1229 07:57:53.356219  940463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:57:53.357745  940463 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:57:53.370489  940463 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:57:53.370572  940463 kubeadm.go:602] duration metric: took 23.458347ms to restartPrimaryControlPlane
	I1229 07:57:53.370597  940463 kubeadm.go:403] duration metric: took 108.612794ms to StartCluster
	I1229 07:57:53.370642  940463 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:57:53.370737  940463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:57:53.376047  940463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:57:53.376406  940463 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:57:53.379794  940463 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:57:53.379841  940463 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:57:53.379969  940463 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-664524"
	I1229 07:57:53.379982  940463 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-664524"
	W1229 07:57:53.379989  940463 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:57:53.380011  940463 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:57:53.380555  940463 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:53.380956  940463 addons.go:70] Setting dashboard=true in profile "embed-certs-664524"
	I1229 07:57:53.380974  940463 addons.go:239] Setting addon dashboard=true in "embed-certs-664524"
	W1229 07:57:53.380987  940463 addons.go:248] addon dashboard should already be in state true
	I1229 07:57:53.381012  940463 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:57:53.381516  940463 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:53.381911  940463 addons.go:70] Setting default-storageclass=true in profile "embed-certs-664524"
	I1229 07:57:53.381935  940463 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-664524"
	I1229 07:57:53.382198  940463 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:53.384729  940463 out.go:179] * Verifying Kubernetes components...
	I1229 07:57:53.388094  940463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:57:53.433871  940463 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:57:53.433987  940463 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:57:53.437744  940463 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:57:54.550336  913056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001438898s
	I1229 07:57:54.550655  913056 kubeadm.go:319] 
	I1229 07:57:54.550727  913056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:57:54.550768  913056 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:57:54.550871  913056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:57:54.550879  913056 kubeadm.go:319] 
	I1229 07:57:54.550977  913056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:57:54.551010  913056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:57:54.551042  913056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:57:54.551049  913056 kubeadm.go:319] 
	I1229 07:57:54.555412  913056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:57:54.555837  913056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:57:54.555958  913056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:57:54.556227  913056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:57:54.556237  913056 kubeadm.go:319] 
	I1229 07:57:54.556305  913056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:57:54.556358  913056 kubeadm.go:403] duration metric: took 8m5.813268978s to StartCluster
	I1229 07:57:54.556394  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:57:54.556458  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:57:54.612243  913056 cri.go:96] found id: ""
	I1229 07:57:54.612274  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.612283  913056 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:57:54.612290  913056 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:57:54.612348  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:57:54.645699  913056 cri.go:96] found id: ""
	I1229 07:57:54.645720  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.645728  913056 logs.go:284] No container was found matching "etcd"
	I1229 07:57:54.645734  913056 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:57:54.645796  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:57:54.683214  913056 cri.go:96] found id: ""
	I1229 07:57:54.683237  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.683246  913056 logs.go:284] No container was found matching "coredns"
	I1229 07:57:54.683252  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:57:54.683309  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:57:54.727409  913056 cri.go:96] found id: ""
	I1229 07:57:54.727431  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.727440  913056 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:57:54.727447  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:57:54.727503  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:57:54.757950  913056 cri.go:96] found id: ""
	I1229 07:57:54.757971  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.757980  913056 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:57:54.757986  913056 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:57:54.758044  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:57:54.785787  913056 cri.go:96] found id: ""
	I1229 07:57:54.785809  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.785818  913056 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:57:54.785824  913056 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:57:54.785884  913056 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:57:54.814203  913056 cri.go:96] found id: ""
	I1229 07:57:54.814224  913056 logs.go:282] 0 containers: []
	W1229 07:57:54.814232  913056 logs.go:284] No container was found matching "kindnet"
	I1229 07:57:54.814242  913056 logs.go:123] Gathering logs for dmesg ...
	I1229 07:57:54.814254  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:57:54.830942  913056 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:57:54.831101  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:57:54.978517  913056 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:57:54.970778    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.971495    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.972582    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.973250    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.974772    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:57:54.970778    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.971495    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.972582    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.973250    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:54.974772    4878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:57:54.978591  913056 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:57:54.978628  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:57:55.014450  913056 logs.go:123] Gathering logs for container status ...
	I1229 07:57:55.014494  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:57:55.060024  913056 logs.go:123] Gathering logs for kubelet ...
	I1229 07:57:55.060052  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1229 07:57:55.154919  913056 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001438898s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:57:55.155036  913056 out.go:285] * 
	W1229 07:57:55.155269  913056 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001438898s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:57:55.155336  913056 out.go:285] * 
	W1229 07:57:55.155627  913056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:57:55.160728  913056 out.go:203] 
	W1229 07:57:55.164773  913056 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001438898s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:57:55.165085  913056 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:57:55.165155  913056 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:57:55.169973  913056 out.go:203] 
	I1229 07:57:53.437972  940463 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:57:53.437989  940463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:57:53.438051  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:53.444820  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:57:53.444886  940463 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:57:53.444960  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:53.446947  940463 addons.go:239] Setting addon default-storageclass=true in "embed-certs-664524"
	W1229 07:57:53.446976  940463 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:57:53.446999  940463 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:57:53.447428  940463 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:53.490563  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:53.503526  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:53.513146  940463 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:57:53.513170  940463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:57:53.513245  940463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:53.544884  940463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:53.767387  940463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:57:53.795581  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:57:53.795608  940463 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:57:53.812432  940463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:57:53.821824  940463 node_ready.go:35] waiting up to 6m0s for node "embed-certs-664524" to be "Ready" ...
	I1229 07:57:53.852660  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:57:53.852688  940463 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:57:53.889352  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:57:53.889387  940463 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:57:53.947570  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:57:53.947598  940463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:57:53.949811  940463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:57:53.986814  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:57:53.986842  940463 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:57:54.038011  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:57:54.038052  940463 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:57:54.079768  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:57:54.079813  940463 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:57:54.118993  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:57:54.119034  940463 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:57:54.132653  940463 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:57:54.132707  940463 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:57:54.147768  940463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.006744627Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007166999Z" level=info msg="Starting seccomp notifier watcher"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007275956Z" level=info msg="Create NRI interface"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007406853Z" level=info msg="built-in NRI default validator is disabled"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007418685Z" level=info msg="runtime interface created"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007431609Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007438952Z" level=info msg="runtime interface starting up..."
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007447133Z" level=info msg="starting plugins..."
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007464142Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 29 07:49:47 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:47.007549846Z" level=info msg="No systemd watchdog enabled"
	Dec 29 07:49:47 force-systemd-flag-122604 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.088020898Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=0074288a-d05d-44f5-aab3-d78b35f4b62d name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.088740635Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=20699061-3ee2-433c-82a4-3511e24068f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.089277806Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=681d0ac4-ab18-4cce-9126-f2f708241c5c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.08969843Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=936bb5b5-f446-4da0-9e3c-3acf859ecbae name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.090148314Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=560e6f3b-5d6b-413e-a46c-d6b8a7e9357e name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.090567748Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2e714ac0-1dfd-4fc6-ae93-bc43b6a58944 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:49:49 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:49:49.09102249Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3f1b86f5-a5ba-4f23-bdc8-11082d864358 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.902230127Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=0b134e3a-9fec-4eb8-8997-ab672a5dc2d0 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.903082046Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=dee1b62a-afd9-4c73-83f1-580c385ad8cc name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.903624506Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=c05b6ad5-2a9e-42f3-8107-0ae3f2062f20 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.904099406Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=f0085530-caa2-4dab-88a8-6e4132c228be name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.904582592Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d19341a5-fee7-46ee-8d8d-4c87cc66a6eb name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.905011649Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f83b0e5d-0151-4915-8c3b-a0ee14566557 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:52 force-systemd-flag-122604 crio[841]: time="2025-12-29T07:53:52.905427906Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=b414d056-775c-455c-ad7f-5aa2fe186c7e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:57:56.904027    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:56.913495    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:56.915309    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:56.915974    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:57:56.917659    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:57:56 up  4:40,  0 user,  load average: 1.24, 1.67, 2.14
	Linux force-systemd-flag-122604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 29 07:57:54 force-systemd-flag-122604 kubelet[4816]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:57:54 force-systemd-flag-122604 kubelet[4816]: E1229 07:57:54.520092    4816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:57:54 force-systemd-flag-122604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:57:54 force-systemd-flag-122604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:57:55 force-systemd-flag-122604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 29 07:57:55 force-systemd-flag-122604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:57:55 force-systemd-flag-122604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:57:55 force-systemd-flag-122604 kubelet[4898]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:57:55 force-systemd-flag-122604 kubelet[4898]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:57:55 force-systemd-flag-122604 kubelet[4898]: E1229 07:57:55.335300    4898 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:57:55 force-systemd-flag-122604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:57:55 force-systemd-flag-122604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:57:56 force-systemd-flag-122604 kubelet[4924]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:57:56 force-systemd-flag-122604 kubelet[4924]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:57:56 force-systemd-flag-122604 kubelet[4924]: E1229 07:57:56.116627    4924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:57:56 force-systemd-flag-122604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:57:57 force-systemd-flag-122604 kubelet[5018]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:57:57 force-systemd-flag-122604 kubelet[5018]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-122604 -n force-systemd-flag-122604
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-122604 -n force-systemd-flag-122604: exit status 6 (486.254419ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:57:57.574554  942419 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-122604" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-122604" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-122604" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-122604
E1229 07:57:58.800115  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-122604: (2.375200167s)
--- FAIL: TestForceSystemdFlag (504.78s)
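
The three identical kubeadm dumps in this failure trace back to a single condition: the host is still on cgroups v1 (kernel 5.15.0-1084-aws, Docker reporting CgroupDriver:cgroupfs), and the v1.35.0 kubelet refuses to start there, so it crash-loops (restart counter 646-648 in the kubelet journal) and kubeadm times out waiting on http://127.0.0.1:10248/healthz. The sketch below is a minimal triage path built only from commands this output itself quotes; the profile name is taken from the log, the `minikube ssh` wrapping is added for convenience, and whether the suggested --extra-config flag alone is sufficient on a cgroup v1 host with kubelet v1.35 is an assumption, not something this run verifies.

	# inspect the crash-looping kubelet on the node (commands quoted in the kubeadm error text)
	minikube -p force-systemd-flag-122604 ssh -- sudo systemctl status kubelet
	minikube -p force-systemd-flag-122604 ssh -- sudo journalctl -xeu kubelet -n 50

	# the suggestion printed by minikube above (related issue: kubernetes/minikube#4172)
	minikube start -p force-systemd-flag-122604 --driver=docker --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd

	# per the kubeadm [WARNING SystemVerification] text, cgroup v1 support for kubelet v1.35+
	# must be re-enabled explicitly; the warning names the option 'FailCgroupV1', which in a
	# KubeletConfiguration YAML would read (assumed spelling):
	#     failCgroupV1: false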

                                                
                                    
x
+
TestForceSystemdEnv (508.48s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-002562 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-002562 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m24.876287958s)

                                                
                                                
-- stdout --
	* [force-systemd-env-002562] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-002562" primary control-plane node in "force-systemd-env-002562" cluster
	* Pulling base image v0.0.48-1766979815-22353 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:42:37.024465  893127 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:42:37.024654  893127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:42:37.024662  893127 out.go:374] Setting ErrFile to fd 2...
	I1229 07:42:37.024666  893127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:42:37.024980  893127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:42:37.025516  893127 out.go:368] Setting JSON to false
	I1229 07:42:37.026596  893127 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15909,"bootTime":1766978248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:42:37.026698  893127 start.go:143] virtualization:  
	I1229 07:42:37.031734  893127 out.go:179] * [force-systemd-env-002562] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:42:37.034880  893127 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:42:37.034940  893127 notify.go:221] Checking for updates...
	I1229 07:42:37.041234  893127 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:42:37.044256  893127 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:42:37.047104  893127 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:42:37.050302  893127 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:42:37.053090  893127 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1229 07:42:37.056584  893127 config.go:182] Loaded profile config "test-preload-759848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:42:37.056678  893127 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:42:37.092329  893127 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:42:37.092445  893127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:42:37.175629  893127 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-29 07:42:37.163901243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:42:37.175728  893127 docker.go:319] overlay module found
	I1229 07:42:37.179315  893127 out.go:179] * Using the docker driver based on user configuration
	I1229 07:42:37.182452  893127 start.go:309] selected driver: docker
	I1229 07:42:37.182474  893127 start.go:928] validating driver "docker" against <nil>
	I1229 07:42:37.182488  893127 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:42:37.183204  893127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:42:37.257120  893127 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-29 07:42:37.247865385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:42:37.257277  893127 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:42:37.257486  893127 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:42:37.259745  893127 out.go:179] * Using Docker driver with root privileges
	I1229 07:42:37.266271  893127 cni.go:84] Creating CNI manager for ""
	I1229 07:42:37.266338  893127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:42:37.266348  893127 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:42:37.266434  893127 start.go:353] cluster config:
	{Name:force-systemd-env-002562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-002562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:42:37.269769  893127 out.go:179] * Starting "force-systemd-env-002562" primary control-plane node in "force-systemd-env-002562" cluster
	I1229 07:42:37.272678  893127 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:42:37.275633  893127 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:42:37.278560  893127 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:42:37.278604  893127 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:42:37.278615  893127 cache.go:65] Caching tarball of preloaded images
	I1229 07:42:37.278639  893127 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:42:37.278733  893127 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:42:37.278742  893127 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:42:37.278870  893127 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/config.json ...
	I1229 07:42:37.278888  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/config.json: {Name:mkc0301a1d0ac6054c092eb012757056bcc6944b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:37.309696  893127 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:42:37.309715  893127 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:42:37.309797  893127 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:42:37.309832  893127 start.go:360] acquireMachinesLock for force-systemd-env-002562: {Name:mk6effc4023c7573a76ef36fb54b0234124b84ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:42:37.309945  893127 start.go:364] duration metric: took 96.526µs to acquireMachinesLock for "force-systemd-env-002562"
	I1229 07:42:37.309978  893127 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-002562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-002562 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:42:37.310040  893127 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:42:37.313935  893127 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:42:37.314174  893127 start.go:159] libmachine.API.Create for "force-systemd-env-002562" (driver="docker")
	I1229 07:42:37.314203  893127 client.go:173] LocalClient.Create starting
	I1229 07:42:37.314265  893127 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:42:37.314297  893127 main.go:144] libmachine: Decoding PEM data...
	I1229 07:42:37.314316  893127 main.go:144] libmachine: Parsing certificate...
	I1229 07:42:37.314365  893127 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:42:37.314381  893127 main.go:144] libmachine: Decoding PEM data...
	I1229 07:42:37.314391  893127 main.go:144] libmachine: Parsing certificate...
	I1229 07:42:37.314746  893127 cli_runner.go:164] Run: docker network inspect force-systemd-env-002562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:42:37.332858  893127 cli_runner.go:211] docker network inspect force-systemd-env-002562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:42:37.332935  893127 network_create.go:284] running [docker network inspect force-systemd-env-002562] to gather additional debugging logs...
	I1229 07:42:37.332952  893127 cli_runner.go:164] Run: docker network inspect force-systemd-env-002562
	W1229 07:42:37.349674  893127 cli_runner.go:211] docker network inspect force-systemd-env-002562 returned with exit code 1
	I1229 07:42:37.349701  893127 network_create.go:287] error running [docker network inspect force-systemd-env-002562]: docker network inspect force-systemd-env-002562: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-002562 not found
	I1229 07:42:37.349726  893127 network_create.go:289] output of [docker network inspect force-systemd-env-002562]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-002562 not found
	
	** /stderr **
	I1229 07:42:37.349854  893127 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:42:37.366891  893127 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:42:37.367239  893127 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:42:37.367560  893127 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:42:37.367772  893127 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b484aa7c2632 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:41:d7:c4:2f:ab} reservation:<nil>}
	I1229 07:42:37.368203  893127 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a68d60}
	I1229 07:42:37.368220  893127 network_create.go:124] attempt to create docker network force-systemd-env-002562 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:42:37.368284  893127 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-002562 force-systemd-env-002562
	I1229 07:42:37.428218  893127 network_create.go:108] docker network force-systemd-env-002562 192.168.85.0/24 created
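For reference, the subnet chosen above (192.168.85.0/24, after 49/58/67/76 were found taken) can be read back from the freshly created network with the same Go template this log uses elsewhere; a small host-side sketch:

    # Confirm the bridge network's assigned subnet on the test host
    docker network inspect force-systemd-env-002562 \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'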
	I1229 07:42:37.428261  893127 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-002562" container
	I1229 07:42:37.428333  893127 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:42:37.445105  893127 cli_runner.go:164] Run: docker volume create force-systemd-env-002562 --label name.minikube.sigs.k8s.io=force-systemd-env-002562 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:42:37.466433  893127 oci.go:103] Successfully created a docker volume force-systemd-env-002562
	I1229 07:42:37.466527  893127 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-002562-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-002562 --entrypoint /usr/bin/test -v force-systemd-env-002562:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:42:38.267323  893127 oci.go:107] Successfully prepared a docker volume force-systemd-env-002562
	I1229 07:42:38.267391  893127 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:42:38.267409  893127 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:42:38.267482  893127 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-002562:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:42:43.079637  893127 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-002562:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (4.812113978s)
	I1229 07:42:43.079669  893127 kic.go:203] duration metric: took 4.812255576s to extract preloaded images to volume ...
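The preload file mounted into that helper container is an lz4-compressed tar of the cached images; assuming the lz4 CLI is installed on the host, its contents can be listed without extracting (illustrative only, not part of the run):

    # Peek at the preloaded image layout (host-side; requires lz4)
    tar -I lz4 -tf /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 | head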
	W1229 07:42:43.079815  893127 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:42:43.079915  893127 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:42:43.177963  893127 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-002562 --name force-systemd-env-002562 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-002562 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-002562 --network force-systemd-env-002562 --ip 192.168.85.2 --volume force-systemd-env-002562:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:42:43.640292  893127 cli_runner.go:164] Run: docker container inspect force-systemd-env-002562 --format={{.State.Running}}
	I1229 07:42:43.665265  893127 cli_runner.go:164] Run: docker container inspect force-systemd-env-002562 --format={{.State.Status}}
	I1229 07:42:43.693063  893127 cli_runner.go:164] Run: docker exec force-systemd-env-002562 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:42:43.757501  893127 oci.go:144] the created container "force-systemd-env-002562" has a running status.
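The long `docker run` above publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 ports; the SSH port the provisioner connects to a few lines below (33773) can be looked up directly. A host-side sketch:

    # Which host port Docker mapped for the node's SSH
    docker port force-systemd-env-002562 22/tcp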
	I1229 07:42:43.757539  893127 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa...
	I1229 07:42:44.154641  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:42:44.154743  893127 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:42:44.186532  893127 cli_runner.go:164] Run: docker container inspect force-systemd-env-002562 --format={{.State.Status}}
	I1229 07:42:44.209050  893127 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:42:44.209071  893127 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-002562 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:42:44.274582  893127 cli_runner.go:164] Run: docker container inspect force-systemd-env-002562 --format={{.State.Status}}
	I1229 07:42:44.307941  893127 machine.go:94] provisionDockerMachine start ...
	I1229 07:42:44.308068  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:44.338660  893127 main.go:144] libmachine: Using SSH client type: native
	I1229 07:42:44.339121  893127 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I1229 07:42:44.339140  893127 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:42:44.344988  893127 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:42:47.505162  893127 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-002562
	
	I1229 07:42:47.505190  893127 ubuntu.go:182] provisioning hostname "force-systemd-env-002562"
	I1229 07:42:47.505265  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:47.528037  893127 main.go:144] libmachine: Using SSH client type: native
	I1229 07:42:47.528369  893127 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I1229 07:42:47.528388  893127 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-002562 && echo "force-systemd-env-002562" | sudo tee /etc/hostname
	I1229 07:42:47.703967  893127 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-002562
	
	I1229 07:42:47.704113  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:47.730908  893127 main.go:144] libmachine: Using SSH client type: native
	I1229 07:42:47.731213  893127 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I1229 07:42:47.731229  893127 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-002562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-002562/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-002562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:42:47.893694  893127 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:42:47.893725  893127 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:42:47.893766  893127 ubuntu.go:190] setting up certificates
	I1229 07:42:47.893783  893127 provision.go:84] configureAuth start
	I1229 07:42:47.893863  893127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-002562
	I1229 07:42:47.916051  893127 provision.go:143] copyHostCerts
	I1229 07:42:47.916096  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:42:47.916145  893127 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:42:47.916160  893127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:42:47.916237  893127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:42:47.916330  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:42:47.916379  893127 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:42:47.916392  893127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:42:47.916452  893127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:42:47.916532  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:42:47.916559  893127 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:42:47.916570  893127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:42:47.916620  893127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:42:47.916710  893127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-002562 san=[127.0.0.1 192.168.85.2 force-systemd-env-002562 localhost minikube]
	I1229 07:42:48.042466  893127 provision.go:177] copyRemoteCerts
	I1229 07:42:48.042541  893127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:42:48.042598  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:48.061780  893127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa Username:docker}
	I1229 07:42:48.171066  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:42:48.171128  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:42:48.215345  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:42:48.215418  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1229 07:42:48.270936  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:42:48.271221  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:42:48.322452  893127 provision.go:87] duration metric: took 428.645907ms to configureAuth
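The server certificate generated above carries the SAN list [127.0.0.1 192.168.85.2 force-systemd-env-002562 localhost minikube]; a quick way to confirm that on the host, assuming openssl is available (not something the test itself does):

    # Inspect the SANs of the freshly generated server cert
    openssl x509 -in /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'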
	I1229 07:42:48.322479  893127 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:42:48.322669  893127 config.go:182] Loaded profile config "force-systemd-env-002562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:42:48.322786  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:48.351274  893127 main.go:144] libmachine: Using SSH client type: native
	I1229 07:42:48.351976  893127 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33773 <nil> <nil>}
	I1229 07:42:48.352007  893127 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:42:48.929224  893127 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:42:48.929256  893127 machine.go:97] duration metric: took 4.621287076s to provisionDockerMachine
	I1229 07:42:48.929273  893127 client.go:176] duration metric: took 11.615064478s to LocalClient.Create
	I1229 07:42:48.929287  893127 start.go:167] duration metric: took 11.615115416s to libmachine.API.Create "force-systemd-env-002562"
	I1229 07:42:48.929294  893127 start.go:293] postStartSetup for "force-systemd-env-002562" (driver="docker")
	I1229 07:42:48.929304  893127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:42:48.929367  893127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:42:48.929406  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:48.953900  893127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa Username:docker}
	I1229 07:42:49.078269  893127 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:42:49.083751  893127 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:42:49.083779  893127 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:42:49.083791  893127 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:42:49.083852  893127 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:42:49.083937  893127 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:42:49.083945  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> /etc/ssl/certs/7418542.pem
	I1229 07:42:49.084044  893127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:42:49.098547  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:42:49.128051  893127 start.go:296] duration metric: took 198.741295ms for postStartSetup
	I1229 07:42:49.128491  893127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-002562
	I1229 07:42:49.184945  893127 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/config.json ...
	I1229 07:42:49.185222  893127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:42:49.185264  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:49.249863  893127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa Username:docker}
	I1229 07:42:49.365849  893127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:42:49.382209  893127 start.go:128] duration metric: took 12.072154187s to createHost
	I1229 07:42:49.382254  893127 start.go:83] releasing machines lock for "force-systemd-env-002562", held for 12.072274788s
	I1229 07:42:49.382336  893127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-002562
	I1229 07:42:49.454071  893127 ssh_runner.go:195] Run: cat /version.json
	I1229 07:42:49.454139  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:49.454085  893127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:42:49.454308  893127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-002562
	I1229 07:42:49.544316  893127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa Username:docker}
	I1229 07:42:49.559447  893127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33773 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-env-002562/id_rsa Username:docker}
	I1229 07:42:49.668743  893127 ssh_runner.go:195] Run: systemctl --version
	I1229 07:42:49.784063  893127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:42:49.828014  893127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:42:49.833060  893127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:42:49.833144  893127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:42:49.878429  893127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:42:49.878455  893127 start.go:496] detecting cgroup driver to use...
	I1229 07:42:49.878473  893127 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:42:49.878530  893127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:42:49.903483  893127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:42:49.917776  893127 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:42:49.917841  893127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:42:49.936258  893127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:42:49.958304  893127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:42:50.162525  893127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:42:50.323098  893127 docker.go:234] disabling docker service ...
	I1229 07:42:50.323169  893127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:42:50.352147  893127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:42:50.367949  893127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:42:50.521003  893127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:42:50.676883  893127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:42:50.694805  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:42:50.721345  893127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:42:50.721419  893127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.731933  893127 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:42:50.732018  893127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.747663  893127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.767284  893127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.776968  893127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:42:50.785750  893127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.796326  893127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.812706  893127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:42:50.821896  893127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:42:50.829949  893127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:42:50.837669  893127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:42:50.980422  893127 ssh_runner.go:195] Run: sudo systemctl restart crio
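Condensed, the CRI-O reconfiguration the runner just performed boils down to three edits of /etc/crio/crio.conf.d/02-crio.conf plus a restart. The commands below are the same ones shown in the log (omitting the conmon_cgroup delete/re-add, sysctl and CNI cleanup steps), collected for readability and meant to be run inside the node container:

    # Point CRI-O at the expected pause image and the systemd cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio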
	I1229 07:42:51.776761  893127 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:42:51.776866  893127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:42:51.781345  893127 start.go:574] Will wait 60s for crictl version
	I1229 07:42:51.781409  893127 ssh_runner.go:195] Run: which crictl
	I1229 07:42:51.785581  893127 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:42:51.813451  893127 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:42:51.813537  893127 ssh_runner.go:195] Run: crio --version
	I1229 07:42:51.845518  893127 ssh_runner.go:195] Run: crio --version
	I1229 07:42:51.887342  893127 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:42:51.890123  893127 cli_runner.go:164] Run: docker network inspect force-systemd-env-002562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:42:51.906945  893127 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:42:51.910913  893127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:42:51.921340  893127 kubeadm.go:884] updating cluster {Name:force-systemd-env-002562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-002562 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:42:51.921452  893127 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:42:51.921507  893127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:42:51.980634  893127 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:42:51.980655  893127 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:42:51.980714  893127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:42:52.035673  893127 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:42:52.035694  893127 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:42:52.035702  893127 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:42:52.035787  893127 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-002562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-002562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
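Once that drop-in is written (it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later), the effective kubelet unit can be checked from the test host with a plain docker exec; an illustrative sketch, not part of the test run:

    # Show the kubelet unit including the 10-kubeadm.conf drop-in
    docker exec force-systemd-env-002562 systemctl cat kubelet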
	I1229 07:42:52.035863  893127 ssh_runner.go:195] Run: crio config
	I1229 07:42:52.123685  893127 cni.go:84] Creating CNI manager for ""
	I1229 07:42:52.123757  893127 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:42:52.123789  893127 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:42:52.123841  893127 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-002562 NodeName:force-systemd-env-002562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:42:52.124007  893127 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-002562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
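The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; outside of the test, a config like it can be sanity-checked by hand with kubeadm's dry-run mode, using the kubeadm binary the node already has (illustrative only, run inside the node container):

    # Dry-run the generated config without touching the node
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new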
	I1229 07:42:52.124113  893127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:42:52.133079  893127 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:42:52.133199  893127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:42:52.143044  893127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1229 07:42:52.160528  893127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:42:52.175226  893127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1229 07:42:52.192486  893127 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:42:52.201441  893127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:42:52.217525  893127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:42:52.352961  893127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:42:52.372643  893127 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562 for IP: 192.168.85.2
	I1229 07:42:52.372668  893127 certs.go:195] generating shared ca certs ...
	I1229 07:42:52.372684  893127 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:52.372842  893127 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:42:52.372898  893127 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:42:52.372912  893127 certs.go:257] generating profile certs ...
	I1229 07:42:52.372976  893127 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/client.key
	I1229 07:42:52.372998  893127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/client.crt with IP's: []
	I1229 07:42:52.722459  893127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/client.crt ...
	I1229 07:42:52.722535  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/client.crt: {Name:mka1f9d1fe4331263b80d0fdbdbce6692a9db88d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:52.722785  893127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/client.key ...
	I1229 07:42:52.722825  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/client.key: {Name:mk558d9a813678b807b37a541a2a28bc266256b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:52.722983  893127 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key.1d660d70
	I1229 07:42:52.723045  893127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt.1d660d70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:42:53.078307  893127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt.1d660d70 ...
	I1229 07:42:53.078381  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt.1d660d70: {Name:mke236d4fcd019933190eef7c9b62ecfeea60b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:53.078590  893127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key.1d660d70 ...
	I1229 07:42:53.078627  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key.1d660d70: {Name:mke3e6862d1b249afb8491b85ee68b476ef86652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:53.078762  893127 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt.1d660d70 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt
	I1229 07:42:53.078896  893127 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key.1d660d70 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key
	I1229 07:42:53.078987  893127 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.key
	I1229 07:42:53.079048  893127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.crt with IP's: []
	I1229 07:42:53.443656  893127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.crt ...
	I1229 07:42:53.443683  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.crt: {Name:mka5c4980f4a4e2de85dc016d9a0c082448c538b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:53.443850  893127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.key ...
	I1229 07:42:53.443859  893127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.key: {Name:mk122f3adac6a565b89e1fa6308776296572d093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:42:53.443924  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:42:53.443940  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:42:53.443953  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:42:53.443965  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:42:53.443975  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:42:53.443988  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:42:53.443999  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:42:53.444012  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:42:53.444060  893127 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:42:53.444095  893127 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:42:53.444103  893127 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:42:53.444127  893127 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:42:53.444151  893127 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:42:53.444173  893127 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:42:53.444220  893127 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:42:53.444253  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> /usr/share/ca-certificates/7418542.pem
	I1229 07:42:53.444266  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:42:53.444276  893127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem -> /usr/share/ca-certificates/741854.pem
	I1229 07:42:53.444869  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:42:53.464042  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:42:53.482103  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:42:53.499789  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:42:53.520219  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:42:53.539963  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:42:53.559568  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:42:53.579327  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-env-002562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:42:53.598650  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:42:53.618156  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:42:53.637511  893127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:42:53.657082  893127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:42:53.671177  893127 ssh_runner.go:195] Run: openssl version
	I1229 07:42:53.678020  893127 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:42:53.686009  893127 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:42:53.694023  893127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:42:53.698060  893127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:42:53.698120  893127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:42:53.741112  893127 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:42:53.749542  893127 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:42:53.757772  893127 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:42:53.766180  893127 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:42:53.774374  893127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:42:53.779029  893127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:42:53.779147  893127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:42:53.820666  893127 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:42:53.829120  893127 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:42:53.837424  893127 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:42:53.846086  893127 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:42:53.854687  893127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:42:53.859480  893127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:42:53.859596  893127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:42:53.901257  893127 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:42:53.910083  893127 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:42:53.919034  893127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:42:53.924134  893127 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:42:53.924257  893127 kubeadm.go:401] StartCluster: {Name:force-systemd-env-002562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-002562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:42:53.924368  893127 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:42:53.924469  893127 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:42:53.975470  893127 cri.go:96] found id: ""
	I1229 07:42:53.975635  893127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:42:53.985218  893127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:42:53.994370  893127 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:42:53.994436  893127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:42:54.005006  893127 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:42:54.005038  893127 kubeadm.go:158] found existing configuration files:
	
	I1229 07:42:54.005108  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:42:54.014930  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:42:54.015010  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:42:54.023385  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:42:54.032667  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:42:54.032737  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:42:54.040863  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:42:54.049565  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:42:54.049638  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:42:54.057477  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:42:54.066246  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:42:54.066324  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:42:54.074013  893127 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:42:54.132071  893127 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:42:54.132702  893127 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:42:54.276494  893127 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:42:54.276651  893127 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:42:54.276726  893127 kubeadm.go:319] OS: Linux
	I1229 07:42:54.276833  893127 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:42:54.276914  893127 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:42:54.277024  893127 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:42:54.277108  893127 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:42:54.277198  893127 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:42:54.277291  893127 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:42:54.277376  893127 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:42:54.277458  893127 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:42:54.277550  893127 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:42:54.373253  893127 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:42:54.373437  893127 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:42:54.373561  893127 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:42:54.385360  893127 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:42:54.391345  893127 out.go:252]   - Generating certificates and keys ...
	I1229 07:42:54.391510  893127 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:42:54.391615  893127 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:42:54.439544  893127 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:42:55.253165  893127 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:42:55.383292  893127 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:42:55.637344  893127 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:42:56.059998  893127 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:42:56.060535  893127 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-002562 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:42:56.170276  893127 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:42:56.171271  893127 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-002562 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:42:56.336679  893127 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:42:56.493368  893127 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:42:56.677753  893127 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:42:56.678220  893127 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:42:57.081441  893127 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:42:57.528738  893127 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:42:58.357058  893127 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:42:58.788030  893127 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:42:58.986754  893127 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:42:58.987752  893127 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:42:58.992079  893127 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:42:58.995547  893127 out.go:252]   - Booting up control plane ...
	I1229 07:42:58.995673  893127 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:42:58.996109  893127 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:42:58.997887  893127 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:42:59.017620  893127 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:42:59.017751  893127 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:42:59.027165  893127 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:42:59.027266  893127 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:42:59.027306  893127 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:42:59.189189  893127 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:42:59.189320  893127 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:46:59.190051  893127 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001110158s
	I1229 07:46:59.190139  893127 kubeadm.go:319] 
	I1229 07:46:59.190228  893127 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:46:59.190268  893127 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:46:59.190393  893127 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:46:59.190403  893127 kubeadm.go:319] 
	I1229 07:46:59.190526  893127 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:46:59.190578  893127 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:46:59.190617  893127 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:46:59.190626  893127 kubeadm.go:319] 
	I1229 07:46:59.195380  893127 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:46:59.195848  893127 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:46:59.195963  893127 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:46:59.196238  893127 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:46:59.196264  893127 kubeadm.go:319] 
	I1229 07:46:59.196335  893127 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:46:59.196516  893127 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-002562 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-002562 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001110158s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-002562 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-002562 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001110158s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:46:59.196599  893127 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1229 07:46:59.613172  893127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:46:59.625944  893127 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:46:59.626010  893127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:46:59.633721  893127 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:46:59.633741  893127 kubeadm.go:158] found existing configuration files:
	
	I1229 07:46:59.633792  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:46:59.641508  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:46:59.641576  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:46:59.648971  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:46:59.656917  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:46:59.656984  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:46:59.664603  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:46:59.672712  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:46:59.672778  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:46:59.680629  893127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:46:59.688484  893127 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:46:59.688556  893127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:46:59.696366  893127 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:46:59.735206  893127 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:46:59.735324  893127 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:46:59.807206  893127 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:46:59.807345  893127 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:46:59.807403  893127 kubeadm.go:319] OS: Linux
	I1229 07:46:59.807477  893127 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:46:59.807557  893127 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:46:59.807630  893127 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:46:59.807707  893127 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:46:59.807783  893127 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:46:59.807860  893127 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:46:59.807933  893127 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:46:59.808011  893127 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:46:59.808083  893127 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:46:59.873250  893127 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:46:59.873363  893127 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:46:59.873457  893127 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:46:59.880316  893127 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:46:59.883885  893127 out.go:252]   - Generating certificates and keys ...
	I1229 07:46:59.883998  893127 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:46:59.884097  893127 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:46:59.884189  893127 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:46:59.884262  893127 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:46:59.884343  893127 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:46:59.884405  893127 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:46:59.884477  893127 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:46:59.884547  893127 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:46:59.884632  893127 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:46:59.884714  893127 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:46:59.884766  893127 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:46:59.884905  893127 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:47:00.155322  893127 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:47:00.566020  893127 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:47:00.728723  893127 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:47:00.899094  893127 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:47:01.178988  893127 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:47:01.179709  893127 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:47:01.182632  893127 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:47:01.185833  893127 out.go:252]   - Booting up control plane ...
	I1229 07:47:01.185985  893127 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:47:01.186077  893127 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:47:01.187046  893127 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:47:01.204351  893127 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:47:01.204544  893127 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:47:01.213119  893127 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:47:01.213471  893127 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:47:01.213716  893127 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:47:01.368324  893127 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:47:01.368773  893127 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:51:01.369265  893127 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000750038s
	I1229 07:51:01.369294  893127 kubeadm.go:319] 
	I1229 07:51:01.369351  893127 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:51:01.369392  893127 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:51:01.369504  893127 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:51:01.369510  893127 kubeadm.go:319] 
	I1229 07:51:01.369621  893127 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:51:01.369654  893127 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:51:01.369685  893127 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:51:01.369689  893127 kubeadm.go:319] 
	I1229 07:51:01.374417  893127 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:51:01.375145  893127 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:51:01.375292  893127 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:51:01.375628  893127 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:51:01.375662  893127 kubeadm.go:319] 
	I1229 07:51:01.375797  893127 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:51:01.375835  893127 kubeadm.go:403] duration metric: took 8m7.451583298s to StartCluster
	I1229 07:51:01.375876  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:51:01.375949  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:51:01.403734  893127 cri.go:96] found id: ""
	I1229 07:51:01.403768  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.403778  893127 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:51:01.403785  893127 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:51:01.403840  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:51:01.434138  893127 cri.go:96] found id: ""
	I1229 07:51:01.434164  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.434173  893127 logs.go:284] No container was found matching "etcd"
	I1229 07:51:01.434179  893127 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:51:01.434241  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:51:01.467277  893127 cri.go:96] found id: ""
	I1229 07:51:01.467304  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.467313  893127 logs.go:284] No container was found matching "coredns"
	I1229 07:51:01.467319  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:51:01.467378  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:51:01.504133  893127 cri.go:96] found id: ""
	I1229 07:51:01.504161  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.504170  893127 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:51:01.504177  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:51:01.504237  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:51:01.534862  893127 cri.go:96] found id: ""
	I1229 07:51:01.534891  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.534900  893127 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:51:01.534906  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:51:01.534974  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:51:01.559819  893127 cri.go:96] found id: ""
	I1229 07:51:01.559845  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.559854  893127 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:51:01.559861  893127 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:51:01.559918  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:51:01.586175  893127 cri.go:96] found id: ""
	I1229 07:51:01.586213  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.586222  893127 logs.go:284] No container was found matching "kindnet"
	I1229 07:51:01.586232  893127 logs.go:123] Gathering logs for kubelet ...
	I1229 07:51:01.586245  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:51:01.653160  893127 logs.go:123] Gathering logs for dmesg ...
	I1229 07:51:01.653199  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:51:01.671329  893127 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:51:01.671358  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:51:01.737428  893127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:51:01.729107    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.729834    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731327    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731800    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.733246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:51:01.729107    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.729834    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731327    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731800    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.733246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:51:01.737450  893127 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:51:01.737463  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:51:01.769484  893127 logs.go:123] Gathering logs for container status ...
	I1229 07:51:01.769519  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1229 07:51:01.798513  893127 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:51:01.798575  893127 out.go:285] * 
	* 
	W1229 07:51:01.798643  893127 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:51:01.799007  893127 out.go:285] * 
	* 
	W1229 07:51:01.799471  893127 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:51:01.806278  893127 out.go:203] 
	W1229 07:51:01.808995  893127 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:51:01.809264  893127 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:51:01.809337  893127 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:51:01.812564  893127 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-002562 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-29 07:51:01.866707908 +0000 UTC m=+2503.173665513
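For triage, the kubeadm output captured above already names the next diagnostic steps; a minimal sketch of running them against this profile follows (the `minikube ssh` wrapping and the retry invocation are assumptions for illustration, while the kubelet commands and the cgroup-driver suggestion are quoted from the log):

  # Inspect the kubelet inside the node, per the hints in the kubeadm output
  out/minikube-linux-arm64 -p force-systemd-env-002562 ssh -- sudo systemctl status kubelet
  out/minikube-linux-arm64 -p force-systemd-env-002562 ssh -- sudo journalctl -xeu kubelet

  # Retry the start with the suggestion printed in the log above
  out/minikube-linux-arm64 start -p force-systemd-env-002562 --memory=3072 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd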
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-002562
helpers_test.go:244: (dbg) docker inspect force-systemd-env-002562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d",
	        "Created": "2025-12-29T07:42:43.195246993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 893706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:42:43.358734785Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d/hostname",
	        "HostsPath": "/var/lib/docker/containers/8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d/hosts",
	        "LogPath": "/var/lib/docker/containers/8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d/8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d-json.log",
	        "Name": "/force-systemd-env-002562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-002562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-002562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8afa32393c3f32450e7e8523efe2140e3d9669256a35aa92d10febb7a251a40d",
	                "LowerDir": "/var/lib/docker/overlay2/ad857a8e46b68d8d64eb6b3a83819f206138add6e220e9c6ee50003ebcfd10d9-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad857a8e46b68d8d64eb6b3a83819f206138add6e220e9c6ee50003ebcfd10d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad857a8e46b68d8d64eb6b3a83819f206138add6e220e9c6ee50003ebcfd10d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad857a8e46b68d8d64eb6b3a83819f206138add6e220e9c6ee50003ebcfd10d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-002562",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-002562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-002562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-002562",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-002562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3d4318b7be8b603eae9e5a716369385f93757bf2984f709d3e5ac0f7d4d58ba2",
	            "SandboxKey": "/var/run/docker/netns/3d4318b7be8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33773"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33774"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33776"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-002562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:1a:77:0d:6d:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e5edd416f4901f3ba5b405d1c83eb58498da6920c9abb27332b8faf814ce17e",
	                    "EndpointID": "6e0024acb5b22ab96560dfdf66e5ce58d07c00e712f7d5e5ba59b44995ccd3ab",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-002562",
	                        "8afa32393c3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
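When only a single field from the inspect dump is needed (for example the forwarded SSH port listed under "Ports" above), a Go-template query keeps the output short; a sketch against the container from this run:

  # Print the host port mapped to 22/tcp (per the dump above this is 33773)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-002562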
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-002562 -n force-systemd-env-002562
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-002562 -n force-systemd-env-002562: exit status 6 (370.932ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:51:02.258711  916159 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-002562" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
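The status output above also prints the fix for the stale kubectl context; a minimal sketch, assuming the same profile (the -p flag is an assumption, the command itself is quoted from the output):

  # Repoint the kubeconfig entry for this profile, as suggested by the status output
  out/minikube-linux-arm64 update-context -p force-systemd-env-002562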
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-002562 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-095300 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat docker --no-pager                                                                       │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/docker/daemon.json                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo docker system info                                                                                    │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cri-dockerd --version                                                                                 │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat containerd --no-pager                                                                   │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/containerd/config.toml                                                                       │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo containerd config dump                                                                                │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat crio --no-pager                                                                         │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo crio config                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ delete  │ -p cilium-095300                                                                                                            │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:45 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                   │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:49:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:49:35.249268  913056 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:49:35.249374  913056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:49:35.249415  913056 out.go:374] Setting ErrFile to fd 2...
	I1229 07:49:35.249422  913056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:49:35.249683  913056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:49:35.250107  913056 out.go:368] Setting JSON to false
	I1229 07:49:35.250963  913056 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16327,"bootTime":1766978248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:49:35.251033  913056 start.go:143] virtualization:  
	I1229 07:49:35.255392  913056 out.go:179] * [force-systemd-flag-122604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:49:35.260063  913056 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:49:35.260132  913056 notify.go:221] Checking for updates...
	I1229 07:49:35.263828  913056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:49:35.267214  913056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:49:35.270527  913056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:49:35.273814  913056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:49:35.276989  913056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:49:35.280654  913056 config.go:182] Loaded profile config "force-systemd-env-002562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:49:35.280796  913056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:49:35.313105  913056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:49:35.313219  913056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:49:35.376584  913056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:49:35.367132861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:49:35.376694  913056 docker.go:319] overlay module found
	I1229 07:49:35.382052  913056 out.go:179] * Using the docker driver based on user configuration
	I1229 07:49:35.384975  913056 start.go:309] selected driver: docker
	I1229 07:49:35.384996  913056 start.go:928] validating driver "docker" against <nil>
	I1229 07:49:35.385009  913056 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:49:35.385763  913056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:49:35.441289  913056 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:49:35.431732978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:49:35.441443  913056 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:49:35.441658  913056 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:49:35.444852  913056 out.go:179] * Using Docker driver with root privileges
	I1229 07:49:35.447908  913056 cni.go:84] Creating CNI manager for ""
	I1229 07:49:35.447976  913056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:49:35.447989  913056 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:49:35.448076  913056 start.go:353] cluster config:
	{Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:49:35.451515  913056 out.go:179] * Starting "force-systemd-flag-122604" primary control-plane node in "force-systemd-flag-122604" cluster
	I1229 07:49:35.454354  913056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:49:35.457378  913056 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:49:35.460206  913056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:49:35.460266  913056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:49:35.460284  913056 cache.go:65] Caching tarball of preloaded images
	I1229 07:49:35.460290  913056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:49:35.460376  913056 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:49:35.460387  913056 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:49:35.460495  913056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/config.json ...
	I1229 07:49:35.460512  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/config.json: {Name:mk1bf0ef3d904e9a3fc3e557b78cfdfa2369b401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:35.479890  913056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:49:35.479915  913056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:49:35.479933  913056 cache.go:243] Successfully downloaded all kic artifacts
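For reference, the "exists in daemon, skipping load" check above amounts to asking the local daemon for the pinned digest; a minimal sketch (digest copied from this run):

  docker image inspect \
    gcr.io/k8s-minikube/kicbase-builds@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 \
    >/dev/null 2>&1 && echo "kicbase already present, skipping pull"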
	I1229 07:49:35.479965  913056 start.go:360] acquireMachinesLock for force-systemd-flag-122604: {Name:mkce59187063294ba30aa11e10ce77e64adfbebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:49:35.480070  913056 start.go:364] duration metric: took 83.291µs to acquireMachinesLock for "force-systemd-flag-122604"
	I1229 07:49:35.480101  913056 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:49:35.480180  913056 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:49:35.485356  913056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:49:35.485611  913056 start.go:159] libmachine.API.Create for "force-systemd-flag-122604" (driver="docker")
	I1229 07:49:35.485654  913056 client.go:173] LocalClient.Create starting
	I1229 07:49:35.485728  913056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:49:35.485772  913056 main.go:144] libmachine: Decoding PEM data...
	I1229 07:49:35.485790  913056 main.go:144] libmachine: Parsing certificate...
	I1229 07:49:35.485846  913056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:49:35.485869  913056 main.go:144] libmachine: Decoding PEM data...
	I1229 07:49:35.485884  913056 main.go:144] libmachine: Parsing certificate...
	I1229 07:49:35.486262  913056 cli_runner.go:164] Run: docker network inspect force-systemd-flag-122604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:49:35.501878  913056 cli_runner.go:211] docker network inspect force-systemd-flag-122604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:49:35.501973  913056 network_create.go:284] running [docker network inspect force-systemd-flag-122604] to gather additional debugging logs...
	I1229 07:49:35.501996  913056 cli_runner.go:164] Run: docker network inspect force-systemd-flag-122604
	W1229 07:49:35.518029  913056 cli_runner.go:211] docker network inspect force-systemd-flag-122604 returned with exit code 1
	I1229 07:49:35.518074  913056 network_create.go:287] error running [docker network inspect force-systemd-flag-122604]: docker network inspect force-systemd-flag-122604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-122604 not found
	I1229 07:49:35.518111  913056 network_create.go:289] output of [docker network inspect force-systemd-flag-122604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-122604 not found
	
	** /stderr **
	I1229 07:49:35.518212  913056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:49:35.540507  913056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:49:35.540966  913056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:49:35.541445  913056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:49:35.541992  913056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fefd0}
	I1229 07:49:35.542034  913056 network_create.go:124] attempt to create docker network force-systemd-flag-122604 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:49:35.542113  913056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-122604 force-systemd-flag-122604
	I1229 07:49:35.607706  913056 network_create.go:108] docker network force-systemd-flag-122604 192.168.76.0/24 created
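A minimal sketch of the subnet scan and network creation above, assuming the illustrative network name demo-net; the log shows the scan skipping the occupied 192.168.49/58/67 ranges and settling on 192.168.76.0/24:

  # collect subnets already attached to docker networks, then take the first free /24
  taken=$(docker network inspect $(docker network ls -q) \
    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
  for third in 49 58 67 76 85; do
    subnet="192.168.${third}.0/24"
    grep -q "$subnet" <<<"$taken" && continue
    docker network create --driver=bridge --subnet="$subnet" \
      --gateway="192.168.${third}.1" -o com.docker.network.driver.mtu=1500 demo-net
    break
  done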
	I1229 07:49:35.607742  913056 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-122604" container
	I1229 07:49:35.607816  913056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:49:35.624216  913056 cli_runner.go:164] Run: docker volume create force-systemd-flag-122604 --label name.minikube.sigs.k8s.io=force-systemd-flag-122604 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:49:35.643301  913056 oci.go:103] Successfully created a docker volume force-systemd-flag-122604
	I1229 07:49:35.643389  913056 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-122604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-122604 --entrypoint /usr/bin/test -v force-systemd-flag-122604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:49:36.199718  913056 oci.go:107] Successfully prepared a docker volume force-systemd-flag-122604
	I1229 07:49:36.199781  913056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:49:36.199791  913056 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:49:36.199875  913056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-122604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:49:40.097554  913056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-122604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.897621835s)
	I1229 07:49:40.097592  913056 kic.go:203] duration metric: took 3.897797409s to extract preloaded images to volume ...
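The extraction step boils down to streaming the lz4-compressed preload into the named volume through a throwaway container; a sketch with illustrative host paths:

  docker volume create demo-vol
  docker run --rm --entrypoint /usr/bin/tar \
    -v "$PWD/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
    -v demo-vol:/extractDir \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353 \
    -I lz4 -xf /preloaded.tar -C /extractDir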
	W1229 07:49:40.097741  913056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:49:40.097863  913056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:49:40.150163  913056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-122604 --name force-systemd-flag-122604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-122604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-122604 --network force-systemd-flag-122604 --ip 192.168.76.2 --volume force-systemd-flag-122604:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:49:40.465542  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Running}}
	I1229 07:49:40.489118  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Status}}
	I1229 07:49:40.519877  913056 cli_runner.go:164] Run: docker exec force-systemd-flag-122604 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:49:40.581583  913056 oci.go:144] the created container "force-systemd-flag-122604" has a running status.
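Each SSH step that follows first resolves the host port Docker published for the node's port 22, using the same Go template seen in the log; a sketch with an illustrative container name:

  docker container inspect demo-node \
    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'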
	I1229 07:49:40.581611  913056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa...
	I1229 07:49:40.787556  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:49:40.787654  913056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:49:40.815266  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Status}}
	I1229 07:49:40.842840  913056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:49:40.842867  913056 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-122604 chown docker:docker /home/docker/.ssh/authorized_keys]
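A minimal sketch of the key provisioning just logged, assuming the illustrative container name demo-node: generate a keypair on the host, copy the public half into the node's authorized_keys, and fix ownership:

  ssh-keygen -t rsa -b 2048 -N '' -f ./id_rsa
  docker exec demo-node mkdir -p /home/docker/.ssh
  docker cp ./id_rsa.pub demo-node:/home/docker/.ssh/authorized_keys
  docker exec --privileged demo-node chown -R docker:docker /home/docker/.ssh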
	I1229 07:49:40.909683  913056 cli_runner.go:164] Run: docker container inspect force-systemd-flag-122604 --format={{.State.Status}}
	I1229 07:49:40.940519  913056 machine.go:94] provisionDockerMachine start ...
	I1229 07:49:40.940720  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:40.971534  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:40.971879  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:40.971888  913056 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:49:40.972474  913056 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59008->127.0.0.1:33803: read: connection reset by peer
	I1229 07:49:44.128475  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-122604
	
	I1229 07:49:44.128498  913056 ubuntu.go:182] provisioning hostname "force-systemd-flag-122604"
	I1229 07:49:44.128576  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.145938  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:44.146256  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:44.146298  913056 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-122604 && echo "force-systemd-flag-122604" | sudo tee /etc/hostname
	I1229 07:49:44.314558  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-122604
	
	I1229 07:49:44.314638  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.333419  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:44.333768  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:44.333793  913056 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-122604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-122604/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-122604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:49:44.485315  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: 
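The snippet above only touches /etc/hosts when a 127.0.1.1 mapping is missing; a quick check of the result (a sketch):

  hostname    # prints force-systemd-flag-122604
  grep -x '127.0.1.1 force-systemd-flag-122604' /etc/hosts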
	I1229 07:49:44.485355  913056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:49:44.485377  913056 ubuntu.go:190] setting up certificates
	I1229 07:49:44.485392  913056 provision.go:84] configureAuth start
	I1229 07:49:44.485474  913056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-122604
	I1229 07:49:44.506460  913056 provision.go:143] copyHostCerts
	I1229 07:49:44.506524  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:49:44.506575  913056 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:49:44.506586  913056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:49:44.506728  913056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:49:44.506861  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:49:44.506886  913056 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:49:44.506894  913056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:49:44.506924  913056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:49:44.507003  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:49:44.507059  913056 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:49:44.507068  913056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:49:44.507106  913056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:49:44.507195  913056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-122604 san=[127.0.0.1 192.168.76.2 force-systemd-flag-122604 localhost minikube]
	I1229 07:49:44.771898  913056 provision.go:177] copyRemoteCerts
	I1229 07:49:44.771968  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:49:44.772035  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.790622  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:44.897529  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:49:44.897674  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:49:44.915578  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:49:44.915649  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:49:44.933629  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:49:44.933718  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:49:44.955290  913056 provision.go:87] duration metric: took 469.882491ms to configureAuth
	I1229 07:49:44.955315  913056 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:49:44.955514  913056 config.go:182] Loaded profile config "force-systemd-flag-122604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:49:44.955627  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:44.975562  913056 main.go:144] libmachine: Using SSH client type: native
	I1229 07:49:44.975870  913056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33803 <nil> <nil>}
	I1229 07:49:44.975885  913056 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:49:45.363967  913056 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:49:45.363992  913056 machine.go:97] duration metric: took 4.423451846s to provisionDockerMachine
	I1229 07:49:45.364004  913056 client.go:176] duration metric: took 9.87833796s to LocalClient.Create
	I1229 07:49:45.364019  913056 start.go:167] duration metric: took 9.878409371s to libmachine.API.Create "force-systemd-flag-122604"
	I1229 07:49:45.364026  913056 start.go:293] postStartSetup for "force-systemd-flag-122604" (driver="docker")
	I1229 07:49:45.364052  913056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:49:45.364120  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:49:45.364171  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.382554  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.489966  913056 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:49:45.493442  913056 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:49:45.493526  913056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:49:45.493545  913056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:49:45.493604  913056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:49:45.493693  913056 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:49:45.493705  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> /etc/ssl/certs/7418542.pem
	I1229 07:49:45.493816  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:49:45.501496  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:49:45.518943  913056 start.go:296] duration metric: took 154.900452ms for postStartSetup
	I1229 07:49:45.519317  913056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-122604
	I1229 07:49:45.536352  913056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/config.json ...
	I1229 07:49:45.536636  913056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:49:45.536677  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.553541  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.658023  913056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:49:45.663073  913056 start.go:128] duration metric: took 10.182874668s to createHost
	I1229 07:49:45.663102  913056 start.go:83] releasing machines lock for "force-systemd-flag-122604", held for 10.183016109s
	I1229 07:49:45.663233  913056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-122604
	I1229 07:49:45.680754  913056 ssh_runner.go:195] Run: cat /version.json
	I1229 07:49:45.680838  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.680893  913056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:49:45.680960  913056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-122604
	I1229 07:49:45.706663  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.714684  913056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33803 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/force-systemd-flag-122604/id_rsa Username:docker}
	I1229 07:49:45.905273  913056 ssh_runner.go:195] Run: systemctl --version
	I1229 07:49:45.911874  913056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:49:45.949399  913056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:49:45.953891  913056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:49:45.953973  913056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:49:45.982134  913056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:49:45.982158  913056 start.go:496] detecting cgroup driver to use...
	I1229 07:49:45.982172  913056 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:49:45.982252  913056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:49:46.001208  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:49:46.016717  913056 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:49:46.016840  913056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:49:46.035267  913056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:49:46.054195  913056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:49:46.176312  913056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:49:46.301685  913056 docker.go:234] disabling docker service ...
	I1229 07:49:46.301756  913056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:49:46.323995  913056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:49:46.337586  913056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:49:46.475207  913056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:49:46.604163  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:49:46.618065  913056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:49:46.632608  913056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:49:46.632764  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.642122  913056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1229 07:49:46.642195  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.651077  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.659900  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.669139  913056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:49:46.677350  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.686430  913056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.700738  913056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:49:46.709792  913056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:49:46.717095  913056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:49:46.724633  913056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:49:46.837291  913056 ssh_runner.go:195] Run: sudo systemctl restart crio
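Taken together, the sed edits above converge on a CRI-O drop-in with the values below; a sketch for inspecting it (values copied from the commands in the log):

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, per the sed commands above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",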
	I1229 07:49:47.013749  913056 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:49:47.013859  913056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:49:47.017928  913056 start.go:574] Will wait 60s for crictl version
	I1229 07:49:47.018025  913056 ssh_runner.go:195] Run: which crictl
	I1229 07:49:47.021782  913056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:49:47.051254  913056 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:49:47.051342  913056 ssh_runner.go:195] Run: crio --version
	I1229 07:49:47.080006  913056 ssh_runner.go:195] Run: crio --version
	I1229 07:49:47.113981  913056 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:49:47.116929  913056 cli_runner.go:164] Run: docker network inspect force-systemd-flag-122604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:49:47.134029  913056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:49:47.137842  913056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:49:47.148094  913056 kubeadm.go:884] updating cluster {Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:49:47.148233  913056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:49:47.148298  913056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:49:47.189088  913056 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:49:47.189116  913056 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:49:47.189175  913056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:49:47.227010  913056 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:49:47.227034  913056 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:49:47.227042  913056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1229 07:49:47.227131  913056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-122604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
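A small sketch for confirming the drop-in once it lands (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines further down):

  systemctl cat kubelet | grep -A3 '10-kubeadm.conf'
  sudo systemctl daemon-reload && sudo systemctl start kubelet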
	I1229 07:49:47.227213  913056 ssh_runner.go:195] Run: crio config
	I1229 07:49:47.313552  913056 cni.go:84] Creating CNI manager for ""
	I1229 07:49:47.313579  913056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:49:47.313599  913056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:49:47.313623  913056 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-122604 NodeName:force-systemd-flag-122604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:49:47.313746  913056 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-122604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
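One way to sanity-check the generated config before the real init is a dry run; a sketch using the paths from this log (the SystemVerification skip mirrors the note further down about the docker driver):

  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=SystemVerification \
    --dry-run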
	
	I1229 07:49:47.313818  913056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:49:47.322877  913056 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:49:47.322959  913056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:49:47.331120  913056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1229 07:49:47.344904  913056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:49:47.358841  913056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1229 07:49:47.372273  913056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:49:47.376310  913056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:49:47.386738  913056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:49:47.500574  913056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:49:47.517197  913056 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604 for IP: 192.168.76.2
	I1229 07:49:47.517261  913056 certs.go:195] generating shared ca certs ...
	I1229 07:49:47.517293  913056 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:47.517486  913056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:49:47.517559  913056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:49:47.517595  913056 certs.go:257] generating profile certs ...
	I1229 07:49:47.517671  913056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.key
	I1229 07:49:47.517707  913056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.crt with IP's: []
	I1229 07:49:47.816408  913056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.crt ...
	I1229 07:49:47.816441  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.crt: {Name:mk33584c9debe594af3bebb9cb8258d1031bb064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:47.816639  913056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.key ...
	I1229 07:49:47.816653  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/client.key: {Name:mk8e0a0e3fdd19f8261ca7bb26bd741f3e0ca42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:47.816751  913056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82
	I1229 07:49:47.816770  913056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:49:48.089355  913056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82 ...
	I1229 07:49:48.089390  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82: {Name:mk08aa586a105be4dae2eb603280608008f5a395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.089585  913056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82 ...
	I1229 07:49:48.089601  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82: {Name:mk36e94961f60c3ee70d884be8575a3a8ec332f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.089684  913056 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt.4739ec82 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt
	I1229 07:49:48.089773  913056 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key.4739ec82 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key
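A quick sketch for confirming the SANs baked into the freshly written apiserver certificate (profile path from the log, output trimmed to the relevant extension):

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt \
    | grep -A1 'Subject Alternative Name'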
	I1229 07:49:48.089835  913056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key
	I1229 07:49:48.089854  913056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt with IP's: []
	I1229 07:49:48.293637  913056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt ...
	I1229 07:49:48.293669  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt: {Name:mk5683a320bd98514c9c3e4cbb9c75305f72f097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.293851  913056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key ...
	I1229 07:49:48.293865  913056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key: {Name:mk1c68b6163de22fce3f17027fca250f69290ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:49:48.293956  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:49:48.293978  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:49:48.293993  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:49:48.294009  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:49:48.294020  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:49:48.294036  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:49:48.294053  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:49:48.294063  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:49:48.294119  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:49:48.294161  913056 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:49:48.294176  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:49:48.294201  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:49:48.294230  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:49:48.294261  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:49:48.294312  913056 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:49:48.294346  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.294361  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem -> /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.294372  913056 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.294886  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:49:48.314800  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:49:48.332474  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:49:48.351130  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:49:48.369256  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:49:48.387101  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:49:48.404617  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:49:48.421876  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/force-systemd-flag-122604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:49:48.439705  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:49:48.457532  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:49:48.474996  913056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:49:48.492037  913056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:49:48.504947  913056 ssh_runner.go:195] Run: openssl version
	I1229 07:49:48.511278  913056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.519120  913056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:49:48.526746  913056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.530522  913056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.530594  913056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:49:48.571311  913056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:49:48.578901  913056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:49:48.586388  913056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.593903  913056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:49:48.601189  913056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.604848  913056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.604915  913056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:49:48.645906  913056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:49:48.653539  913056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:49:48.661174  913056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.668565  913056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:49:48.676484  913056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.680530  913056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.680603  913056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:49:48.722648  913056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:49:48.730502  913056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
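The -hash/ln pairs above follow OpenSSL's hashed-directory convention: the symlink name is the certificate's subject hash plus a ".0" suffix, so a CA can be found by hash lookup. A sketch for the minikubeCA entry:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
  ls -l "/etc/ssl/certs/${hash}.0"    # b5213941.0 in this run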
	I1229 07:49:48.738361  913056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:49:48.742990  913056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:49:48.743095  913056 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-122604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-122604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:49:48.743208  913056 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:49:48.743297  913056 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:49:48.777301  913056 cri.go:96] found id: ""
	I1229 07:49:48.777422  913056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:49:48.785633  913056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:49:48.793430  913056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:49:48.793529  913056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:49:48.801370  913056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:49:48.801391  913056 kubeadm.go:158] found existing configuration files:
	
	I1229 07:49:48.801452  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:49:48.808885  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:49:48.808995  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:49:48.816114  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:49:48.823981  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:49:48.824054  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:49:48.831607  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:49:48.839611  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:49:48.839682  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:49:48.847644  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:49:48.855825  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:49:48.855901  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:49:48.863605  913056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:49:48.902235  913056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:49:48.902300  913056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:49:49.010225  913056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:49:49.010308  913056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:49:49.010350  913056 kubeadm.go:319] OS: Linux
	I1229 07:49:49.010406  913056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:49:49.010460  913056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:49:49.010511  913056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:49:49.010563  913056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:49:49.010614  913056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:49:49.010666  913056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:49:49.010714  913056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:49:49.010765  913056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:49:49.010814  913056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:49:49.084691  913056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:49:49.084909  913056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:49:49.085048  913056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:49:49.093287  913056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:49:49.099851  913056 out.go:252]   - Generating certificates and keys ...
	I1229 07:49:49.099943  913056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:49:49.100020  913056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:49:49.290759  913056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:49:49.601968  913056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:49:49.748913  913056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:49:50.030390  913056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:49:50.398140  913056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:49:50.398583  913056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:49:50.566597  913056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:49:50.567117  913056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:49:50.731085  913056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:49:51.038703  913056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:49:51.307597  913056 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:49:51.307961  913056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:49:51.600251  913056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:49:51.661463  913056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:49:51.780528  913056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:49:51.918077  913056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:49:52.030154  913056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:49:52.030922  913056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:49:52.033728  913056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:49:52.037272  913056 out.go:252]   - Booting up control plane ...
	I1229 07:49:52.037376  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:49:52.037450  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:49:52.039203  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:49:52.055715  913056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:49:52.055963  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:49:52.064098  913056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:49:52.064606  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:49:52.064715  913056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:49:52.200744  913056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:49:52.200998  913056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:51:01.369265  893127 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000750038s
	I1229 07:51:01.369294  893127 kubeadm.go:319] 
	I1229 07:51:01.369351  893127 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:51:01.369392  893127 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:51:01.369504  893127 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:51:01.369510  893127 kubeadm.go:319] 
	I1229 07:51:01.369621  893127 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:51:01.369654  893127 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:51:01.369685  893127 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:51:01.369689  893127 kubeadm.go:319] 
	I1229 07:51:01.374417  893127 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:51:01.375145  893127 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:51:01.375292  893127 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:51:01.375628  893127 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:51:01.375662  893127 kubeadm.go:319] 
	I1229 07:51:01.375797  893127 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:51:01.375835  893127 kubeadm.go:403] duration metric: took 8m7.451583298s to StartCluster
	I1229 07:51:01.375876  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:51:01.375949  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:51:01.403734  893127 cri.go:96] found id: ""
	I1229 07:51:01.403768  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.403778  893127 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:51:01.403785  893127 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:51:01.403840  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:51:01.434138  893127 cri.go:96] found id: ""
	I1229 07:51:01.434164  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.434173  893127 logs.go:284] No container was found matching "etcd"
	I1229 07:51:01.434179  893127 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:51:01.434241  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:51:01.467277  893127 cri.go:96] found id: ""
	I1229 07:51:01.467304  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.467313  893127 logs.go:284] No container was found matching "coredns"
	I1229 07:51:01.467319  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:51:01.467378  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:51:01.504133  893127 cri.go:96] found id: ""
	I1229 07:51:01.504161  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.504170  893127 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:51:01.504177  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:51:01.504237  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:51:01.534862  893127 cri.go:96] found id: ""
	I1229 07:51:01.534891  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.534900  893127 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:51:01.534906  893127 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:51:01.534974  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:51:01.559819  893127 cri.go:96] found id: ""
	I1229 07:51:01.559845  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.559854  893127 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:51:01.559861  893127 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:51:01.559918  893127 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:51:01.586175  893127 cri.go:96] found id: ""
	I1229 07:51:01.586213  893127 logs.go:282] 0 containers: []
	W1229 07:51:01.586222  893127 logs.go:284] No container was found matching "kindnet"
	I1229 07:51:01.586232  893127 logs.go:123] Gathering logs for kubelet ...
	I1229 07:51:01.586245  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:51:01.653160  893127 logs.go:123] Gathering logs for dmesg ...
	I1229 07:51:01.653199  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:51:01.671329  893127 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:51:01.671358  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:51:01.737428  893127 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:51:01.729107    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.729834    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731327    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731800    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.733246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:51:01.729107    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.729834    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731327    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.731800    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:01.733246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:51:01.737450  893127 logs.go:123] Gathering logs for CRI-O ...
	I1229 07:51:01.737463  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1229 07:51:01.769484  893127 logs.go:123] Gathering logs for container status ...
	I1229 07:51:01.769519  893127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1229 07:51:01.798513  893127 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:51:01.798575  893127 out.go:285] * 
	W1229 07:51:01.798643  893127 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:51:01.799007  893127 out.go:285] * 
	W1229 07:51:01.799471  893127 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:51:01.806278  893127 out.go:203] 
	W1229 07:51:01.808995  893127 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000750038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:51:01.809264  893127 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:51:01.809337  893127 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:51:01.812564  893127 out.go:203] 
	
	
	==> CRI-O <==
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.769961462Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.769999018Z" level=info msg="Starting seccomp notifier watcher"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770051613Z" level=info msg="Create NRI interface"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770146203Z" level=info msg="built-in NRI default validator is disabled"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770154876Z" level=info msg="runtime interface created"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770166708Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770172599Z" level=info msg="runtime interface starting up..."
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770178704Z" level=info msg="starting plugins..."
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770190642Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 29 07:42:51 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:51.770245806Z" level=info msg="No systemd watchdog enabled"
	Dec 29 07:42:51 force-systemd-env-002562 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.377528829Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=9d5589c2-5500-45b4-876e-388f1e3e5463 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.37837043Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=26ea391e-8a90-46df-bb9e-f5a2f977924b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.378991598Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=d7e4b52e-9e58-47a4-b606-e2674569bc35 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.379543621Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=35278344-c7e1-4979-be3b-e9115a636ec9 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.380072103Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e51a08ab-3f4c-444a-9925-ee835d518c19 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.380609512Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4c357639-8747-4fbb-8ecd-ce1e07bc2b5c name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:42:54 force-systemd-env-002562 crio[843]: time="2025-12-29T07:42:54.381229162Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=a345d5bd-5607-4d82-bdd0-028145fee32d name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.876273342Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=2f7f7fdb-69bd-4ec8-8a48-6cbb49b9d27d name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.877110299Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=a6342cb0-36e7-465a-9e34-0aec603b922f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.877598624Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=9975b5da-2150-4ad8-ab4f-d46684ebdb7b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.878043995Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=857f57e2-7deb-48cb-ad8f-15f8959dabbc name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.878510823Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=48becb3c-3189-4b60-b01d-0ed5f74e87ce name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.878930643Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=6d86eca9-2c50-4622-aa2e-176267b2992f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:46:59 force-systemd-env-002562 crio[843]: time="2025-12-29T07:46:59.879418378Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=a841cc99-c446-4e20-bcb1-62915f8abced name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:51:02.919317    5020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:02.920089    5020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:02.921567    5020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:02.922032    5020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:51:02.923492    5020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +38.497546] overlayfs: idmapped layers are currently not supported
	[Dec29 07:23] overlayfs: idmapped layers are currently not supported
	[  +3.471993] overlayfs: idmapped layers are currently not supported
	[Dec29 07:24] overlayfs: idmapped layers are currently not supported
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:51:02 up  4:33,  0 user,  load average: 1.15, 1.84, 2.41
	Linux force-systemd-env-002562 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 29 07:51:00 force-systemd-env-002562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:51:01 force-systemd-env-002562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 29 07:51:01 force-systemd-env-002562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:51:01 force-systemd-env-002562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:51:01 force-systemd-env-002562 kubelet[4856]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:51:01 force-systemd-env-002562 kubelet[4856]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:51:01 force-systemd-env-002562 kubelet[4856]: E1229 07:51:01.518992    4856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:51:01 force-systemd-env-002562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:51:01 force-systemd-env-002562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:51:02 force-systemd-env-002562 kubelet[4936]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:51:02 force-systemd-env-002562 kubelet[4936]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:51:02 force-systemd-env-002562 kubelet[4936]: E1229 07:51:02.237829    4936 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:51:02 force-systemd-env-002562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:51:02 force-systemd-env-002562 kubelet[5024]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:51:02 force-systemd-env-002562 kubelet[5024]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 29 07:51:02 force-systemd-env-002562 kubelet[5024]: E1229 07:51:02.998586    5024 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:51:03 force-systemd-env-002562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:51:03 force-systemd-env-002562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-002562 -n force-systemd-env-002562
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-002562 -n force-systemd-env-002562: exit status 6 (343.039033ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:51:03.382072  916387 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-002562" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-002562" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-002562" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-002562
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-002562: (2.031776412s)
--- FAIL: TestForceSystemdEnv (508.48s)
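Editor's note on this failure (not part of the captured output): the kubelet journal in the log bundle above ends with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning points at the kubelet configuration option 'FailCgroupV1', so the 4m0s wait-control-plane timeout is a symptom of the kubelet never becoming healthy on this cgroup v1 host (kernel 5.15.0-1084-aws). A minimal triage sketch follows, assuming shell access to the node; the cgroup-version check via stat is an assumption on my part, while the remaining commands are taken from the output and suggestion printed above:

	# which cgroup hierarchy is the node on? (tmpfs => cgroup v1, cgroup2fs => cgroup v2)
	stat -fc %T /sys/fs/cgroup
	# inspect the kubelet, as the kubeadm output suggests
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# workaround suggested by minikube above (profile name reused for illustration only;
	# whether this is sufficient for kubelet v1.35 on a cgroup v1 host, which also
	# validates FailCgroupV1, is not verified here)
	out/minikube-linux-arm64 start -p force-systemd-env-002562 --extra-config=kubelet.cgroup-driver=systemd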

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.84s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-735012 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-735012 --output=json --user=testUser: exit status 80 (1.838679763s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e287306e-be52-4579-9594-e33a27dcfb27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-735012 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"81b309cc-2303-4d4f-b110-27e19c641aa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-29T07:26:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"9f1cc697-b90f-47e7-bb71-9e84d16d2f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-735012 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.84s)
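Editor's note (not part of the captured output): the GUEST_PAUSE error above comes from minikube running `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory", i.e. runc's state directory is missing on the node rather than the pause operation itself being rejected. A small diagnostic sketch, assuming shell access to the json-output-735012 node; treating /run/runc as runc's default state root is an assumption, the other commands appear elsewhere in this report:

	sudo ls /run/runc                  # the directory the failing runc call tried to open
	sudo runc list -f json             # the exact command minikube pause runs
	sudo runc --root /run/runc list    # same query with the (assumed) state root spelled out
	sudo crictl ps -a                  # confirm CRI-O still reports the expected containers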

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.98s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-735012 --output=json --user=testUser
E1229 07:26:43.291554  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-735012 --output=json --user=testUser: exit status 80 (1.983588793s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6b60cd1d-34c9-4a41-ad50-0d460c10f82c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-735012 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5ae4f06c-e3e8-4400-8480-19cb6e6080b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-29T07:26:44Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"42819a0d-e4cb-4475-bf45-8ca303d9ba02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-735012 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.98s)
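Here the error event itself is captured: GUEST_UNPAUSE, with `sudo runc list -f json` exiting 1 because `/run/runc` cannot be opened on the node. CRI-O is still managing containers (the crictl listings in the TestPause trace below return a full set of IDs), so one plausible reading is that the node's CRI-O is running its containers under an OCI runtime other than runc, leaving runc with no state directory to read; that is an assumption this report does not confirm. A few spot checks that could confirm or rule it out, sketched against the profile name from the log and only valid while the profile still exists:

	# Hypothetical diagnosis commands, run from the Jenkins host
	minikube ssh -p json-output-735012 -- "sudo runc list -f json"             # reproduce the failing call
	minikube ssh -p json-output-735012 -- "ls -ld /run/runc /run/crun"         # which runtime owns state here? either may be absent
	minikube ssh -p json-output-735012 -- "sudo crictl ps --quiet | head -5"   # CRI-O's own view of running containers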

                                                
                                    
x
+
TestPause/serial/Pause (9.33s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-199444 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-199444 --alsologtostderr -v=5: exit status 80 (2.448530896s)

                                                
                                                
-- stdout --
	* Pausing node pause-199444 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:39:55.419621  873716 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:39:55.419837  873716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:55.419860  873716 out.go:374] Setting ErrFile to fd 2...
	I1229 07:39:55.419880  873716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:55.420165  873716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:39:55.420445  873716 out.go:368] Setting JSON to false
	I1229 07:39:55.420492  873716 mustload.go:66] Loading cluster: pause-199444
	I1229 07:39:55.420988  873716 config.go:182] Loaded profile config "pause-199444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:39:55.421554  873716 cli_runner.go:164] Run: docker container inspect pause-199444 --format={{.State.Status}}
	I1229 07:39:55.448894  873716 host.go:66] Checking if "pause-199444" exists ...
	I1229 07:39:55.449216  873716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:55.552778  873716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:39:55.541122253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:55.554017  873716 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-199444 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:39:55.557717  873716 out.go:179] * Pausing node pause-199444 ... 
	I1229 07:39:55.563611  873716 host.go:66] Checking if "pause-199444" exists ...
	I1229 07:39:55.563957  873716 ssh_runner.go:195] Run: systemctl --version
	I1229 07:39:55.564014  873716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:55.586604  873716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:55.695931  873716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:55.709172  873716 pause.go:52] kubelet running: true
	I1229 07:39:55.709241  873716 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:39:55.956402  873716 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:39:55.956494  873716 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:39:56.041404  873716 cri.go:96] found id: "e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3"
	I1229 07:39:56.041468  873716 cri.go:96] found id: "3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272"
	I1229 07:39:56.041497  873716 cri.go:96] found id: "d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99"
	I1229 07:39:56.041514  873716 cri.go:96] found id: "03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658"
	I1229 07:39:56.041535  873716 cri.go:96] found id: "14a2cd392b80e34aa9627f104907fb18c7d7c86a7252093b1c9179bc59c7cc58"
	I1229 07:39:56.041574  873716 cri.go:96] found id: "15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3"
	I1229 07:39:56.041594  873716 cri.go:96] found id: "2bd65abc5d48c1e3a0fe601d1b60601ced20d3e6a4564fa69c349bf8a0a1e1eb"
	I1229 07:39:56.041615  873716 cri.go:96] found id: "de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0"
	I1229 07:39:56.041636  873716 cri.go:96] found id: "1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7"
	I1229 07:39:56.041673  873716 cri.go:96] found id: "c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866"
	I1229 07:39:56.041691  873716 cri.go:96] found id: "52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67"
	I1229 07:39:56.041710  873716 cri.go:96] found id: "4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a"
	I1229 07:39:56.041729  873716 cri.go:96] found id: "f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b"
	I1229 07:39:56.041760  873716 cri.go:96] found id: "93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9"
	I1229 07:39:56.041779  873716 cri.go:96] found id: ""
	I1229 07:39:56.041857  873716 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:39:56.054042  873716 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:39:56Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:39:56.351383  873716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:56.365355  873716 pause.go:52] kubelet running: false
	I1229 07:39:56.365467  873716 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:39:56.534647  873716 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:39:56.534807  873716 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:39:56.615051  873716 cri.go:96] found id: "e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3"
	I1229 07:39:56.615123  873716 cri.go:96] found id: "3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272"
	I1229 07:39:56.615151  873716 cri.go:96] found id: "d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99"
	I1229 07:39:56.615169  873716 cri.go:96] found id: "03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658"
	I1229 07:39:56.615189  873716 cri.go:96] found id: "14a2cd392b80e34aa9627f104907fb18c7d7c86a7252093b1c9179bc59c7cc58"
	I1229 07:39:56.615228  873716 cri.go:96] found id: "15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3"
	I1229 07:39:56.615245  873716 cri.go:96] found id: "2bd65abc5d48c1e3a0fe601d1b60601ced20d3e6a4564fa69c349bf8a0a1e1eb"
	I1229 07:39:56.615262  873716 cri.go:96] found id: "de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0"
	I1229 07:39:56.615282  873716 cri.go:96] found id: "1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7"
	I1229 07:39:56.615319  873716 cri.go:96] found id: "c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866"
	I1229 07:39:56.615338  873716 cri.go:96] found id: "52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67"
	I1229 07:39:56.615357  873716 cri.go:96] found id: "4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a"
	I1229 07:39:56.615376  873716 cri.go:96] found id: "f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b"
	I1229 07:39:56.615409  873716 cri.go:96] found id: "93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9"
	I1229 07:39:56.615427  873716 cri.go:96] found id: ""
	I1229 07:39:56.615501  873716 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:39:56.829099  873716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:56.844089  873716 pause.go:52] kubelet running: false
	I1229 07:39:56.844215  873716 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:39:57.015543  873716 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:39:57.015720  873716 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:39:57.088619  873716 cri.go:96] found id: "e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3"
	I1229 07:39:57.088683  873716 cri.go:96] found id: "3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272"
	I1229 07:39:57.088712  873716 cri.go:96] found id: "d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99"
	I1229 07:39:57.088731  873716 cri.go:96] found id: "03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658"
	I1229 07:39:57.088751  873716 cri.go:96] found id: "14a2cd392b80e34aa9627f104907fb18c7d7c86a7252093b1c9179bc59c7cc58"
	I1229 07:39:57.088791  873716 cri.go:96] found id: "15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3"
	I1229 07:39:57.088815  873716 cri.go:96] found id: "2bd65abc5d48c1e3a0fe601d1b60601ced20d3e6a4564fa69c349bf8a0a1e1eb"
	I1229 07:39:57.088835  873716 cri.go:96] found id: "de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0"
	I1229 07:39:57.088865  873716 cri.go:96] found id: "1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7"
	I1229 07:39:57.088887  873716 cri.go:96] found id: "c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866"
	I1229 07:39:57.088908  873716 cri.go:96] found id: "52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67"
	I1229 07:39:57.088928  873716 cri.go:96] found id: "4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a"
	I1229 07:39:57.088953  873716 cri.go:96] found id: "f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b"
	I1229 07:39:57.088970  873716 cri.go:96] found id: "93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9"
	I1229 07:39:57.088993  873716 cri.go:96] found id: ""
	I1229 07:39:57.089072  873716 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:39:57.426253  873716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:57.443863  873716 pause.go:52] kubelet running: false
	I1229 07:39:57.444025  873716 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:39:57.652902  873716 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:39:57.653064  873716 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:39:57.757536  873716 cri.go:96] found id: "e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3"
	I1229 07:39:57.757615  873716 cri.go:96] found id: "3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272"
	I1229 07:39:57.757714  873716 cri.go:96] found id: "d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99"
	I1229 07:39:57.757750  873716 cri.go:96] found id: "03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658"
	I1229 07:39:57.757769  873716 cri.go:96] found id: "14a2cd392b80e34aa9627f104907fb18c7d7c86a7252093b1c9179bc59c7cc58"
	I1229 07:39:57.757805  873716 cri.go:96] found id: "15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3"
	I1229 07:39:57.757828  873716 cri.go:96] found id: "2bd65abc5d48c1e3a0fe601d1b60601ced20d3e6a4564fa69c349bf8a0a1e1eb"
	I1229 07:39:57.757848  873716 cri.go:96] found id: "de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0"
	I1229 07:39:57.757868  873716 cri.go:96] found id: "1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7"
	I1229 07:39:57.757905  873716 cri.go:96] found id: "c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866"
	I1229 07:39:57.757921  873716 cri.go:96] found id: "52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67"
	I1229 07:39:57.757940  873716 cri.go:96] found id: "4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a"
	I1229 07:39:57.757971  873716 cri.go:96] found id: "f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b"
	I1229 07:39:57.757995  873716 cri.go:96] found id: "93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9"
	I1229 07:39:57.758014  873716 cri.go:96] found id: ""
	I1229 07:39:57.758097  873716 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:39:57.775134  873716 out.go:203] 
	W1229 07:39:57.778248  873716 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:39:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:39:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:39:57.778460  873716 out.go:285] * 
	* 
	W1229 07:39:57.782538  873716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:39:57.785426  873716 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-199444 --alsologtostderr -v=5" : exit status 80
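The stderr trace above shows the sequence `minikube pause` runs on the node: check the host, `systemctl disable --now kubelet`, enumerate kube-system / kubernetes-dashboard / istio-operator containers with crictl (fourteen IDs come back on each attempt), then `sudo runc list -f json`, which fails and is retried until the command gives up with GUEST_PAUSE. Replaying those steps by hand isolates the failing one; this is a sketch adapted from the trace above (dropping `--quiet` on the systemctl call so the state is printed), under the assumption that pause-199444 is still reachable over SSH:

	# Commands taken from the trace above, run individually from the Jenkins host
	minikube ssh -p pause-199444 -- "sudo systemctl is-active service kubelet"
	minikube ssh -p pause-199444 -- "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l"
	minikube ssh -p pause-199444 -- "sudo runc list -f json"   # the step that fails with 'open /run/runc: no such file or directory'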
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-199444
helpers_test.go:244: (dbg) docker inspect pause-199444:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348",
	        "Created": "2025-12-29T07:38:27.961757823Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 865835,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:38:28.963264045Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/hosts",
	        "LogPath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348-json.log",
	        "Name": "/pause-199444",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-199444:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-199444",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348",
	                "LowerDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-199444",
	                "Source": "/var/lib/docker/volumes/pause-199444/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-199444",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-199444",
	                "name.minikube.sigs.k8s.io": "pause-199444",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d502a99e58f7a67d3458820214d19b2a6887cebd2173f56593c8b7aa875522b",
	            "SandboxKey": "/var/run/docker/netns/8d502a99e58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33728"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33729"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33730"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33731"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-199444": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:67:10:1f:e9:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c9898f8f3702d07027fc90360666319a7e172f32a08540f329560eafd37e558",
	                    "EndpointID": "5b9d70025df59fd0a52ef924ef39e4d68d9ef6361bb94d936bdefac399382f08",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-199444",
	                        "6c18e532cae3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
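The inspect dump is mostly useful here for the port map: the node's SSH endpoint is published on 127.0.0.1:33728, which is the address the sshutil lines in the traces above connect to. The same Go template the harness uses can be run by hand to pull that port out of a live container; a small sketch, with the profile name taken from this report:

	# Extract the published host port for the node's SSH endpoint (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-199444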
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-199444 -n pause-199444
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-199444 -n pause-199444: exit status 2 (429.501584ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-199444 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-199444 logs -n 25: (1.764499708s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-272744                                                                                         │ multinode-272744            │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │                     │
	│ start   │ -p multinode-272744-m02 --driver=docker  --container-runtime=crio                                                │ multinode-272744-m02        │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │                     │
	│ start   │ -p multinode-272744-m03 --driver=docker  --container-runtime=crio                                                │ multinode-272744-m03        │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:36 UTC │
	│ node    │ add -p multinode-272744                                                                                          │ multinode-272744            │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ delete  │ -p multinode-272744-m03                                                                                          │ multinode-272744-m03        │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ delete  │ -p multinode-272744                                                                                              │ multinode-272744            │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ start   │ -p scheduled-stop-301780 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ stop    │ -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --cancel-scheduled                                                                      │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
	│ delete  │ -p scheduled-stop-301780                                                                                         │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:38 UTC │
	│ start   │ -p insufficient-storage-190289 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-190289 │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │                     │
	│ delete  │ -p insufficient-storage-190289                                                                                   │ insufficient-storage-190289 │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:38 UTC │
	│ start   │ -p pause-199444 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-199444                │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p missing-upgrade-865040 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-865040      │ jenkins │ v1.35.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p pause-199444 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-199444                │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p missing-upgrade-865040 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-865040      │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │                     │
	│ pause   │ -p pause-199444 --alsologtostderr -v=5                                                                           │ pause-199444                │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:39:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:39:21.285263  870553 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:39:21.285377  870553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:21.285389  870553 out.go:374] Setting ErrFile to fd 2...
	I1229 07:39:21.285395  870553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:21.285660  870553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:39:21.286029  870553 out.go:368] Setting JSON to false
	I1229 07:39:21.286901  870553 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15713,"bootTime":1766978248,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:39:21.286967  870553 start.go:143] virtualization:  
	I1229 07:39:21.291815  870553 out.go:179] * [missing-upgrade-865040] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:39:21.295509  870553 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:39:21.295601  870553 notify.go:221] Checking for updates...
	I1229 07:39:21.301310  870553 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:39:21.304274  870553 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:39:21.307667  870553 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:39:21.310449  870553 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:39:21.313296  870553 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:39:21.316676  870553 config.go:182] Loaded profile config "missing-upgrade-865040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:39:21.320229  870553 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1229 07:39:21.323051  870553 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:39:21.345918  870553 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:39:21.346041  870553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:21.405563  870553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:39:21.396252553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:21.405668  870553 docker.go:319] overlay module found
	I1229 07:39:21.408707  870553 out.go:179] * Using the docker driver based on existing profile
	I1229 07:39:21.411569  870553 start.go:309] selected driver: docker
	I1229 07:39:21.411588  870553 start.go:928] validating driver "docker" against &{Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:21.411696  870553 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:39:21.412434  870553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:21.468326  870553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:39:21.45961504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:21.468630  870553 cni.go:84] Creating CNI manager for ""
	I1229 07:39:21.468708  870553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:39:21.468755  870553 start.go:353] cluster config:
	{Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:21.472091  870553 out.go:179] * Starting "missing-upgrade-865040" primary control-plane node in "missing-upgrade-865040" cluster
	I1229 07:39:21.474912  870553 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:39:21.479735  870553 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:39:21.482563  870553 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1229 07:39:21.482608  870553 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:39:21.482631  870553 cache.go:65] Caching tarball of preloaded images
	I1229 07:39:21.482667  870553 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1229 07:39:21.482733  870553 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:39:21.482744  870553 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1229 07:39:21.482841  870553 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/config.json ...
	I1229 07:39:21.503174  870553 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1229 07:39:21.503199  870553 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1229 07:39:21.503216  870553 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:39:21.503248  870553 start.go:360] acquireMachinesLock for missing-upgrade-865040: {Name:mka6fa0178c2cd1d84f888ae65ca310ceb8ae124 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:21.503311  870553 start.go:364] duration metric: took 40.214µs to acquireMachinesLock for "missing-upgrade-865040"
	I1229 07:39:21.503332  870553 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:39:21.503340  870553 fix.go:54] fixHost starting: 
	I1229 07:39:21.503624  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.520838  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:21.520920  870553 fix.go:112] recreateIfNeeded on missing-upgrade-865040: state= err=unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.520981  870553 fix.go:117] machineExists: false. err=machine does not exist
	I1229 07:39:21.524370  870553 out.go:179] * docker "missing-upgrade-865040" container is missing, will recreate.
	I1229 07:39:23.317349  870002 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:39:23.317383  870002 machine.go:97] duration metric: took 6.871154537s to provisionDockerMachine
	I1229 07:39:23.317395  870002 start.go:293] postStartSetup for "pause-199444" (driver="docker")
	I1229 07:39:23.317411  870002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:23.317492  870002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:23.317540  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.338302  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.445131  870002 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:23.448885  870002 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:23.448918  870002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:39:23.448930  870002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:39:23.449013  870002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:39:23.449149  870002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:39:23.449295  870002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:23.456874  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:23.473659  870002 start.go:296] duration metric: took 156.247989ms for postStartSetup
	I1229 07:39:23.473750  870002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:23.473812  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.490418  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.595154  870002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:23.600302  870002 fix.go:56] duration metric: took 7.183200804s for fixHost
	I1229 07:39:23.600339  870002 start.go:83] releasing machines lock for "pause-199444", held for 7.18326442s
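For context, the two df probes just above read the used percentage and the free space (in GiB) of /var inside the guest. A minimal Go sketch of the same parsing, assuming a local df binary rather than the SSH runner minikube uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // freeGB runs `df -BG <path>` and returns the "Available" column of the
    // second output row, mirroring the `awk 'NR==2{print $4}'` in the log.
    func freeGB(path string) (string, error) {
        out, err := exec.Command("df", "-BG", path).Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) < 2 {
            return "", fmt.Errorf("unexpected df output: %q", out)
        }
        fields := strings.Fields(lines[1])
        if len(fields) < 4 {
            return "", fmt.Errorf("unexpected df row: %q", lines[1])
        }
        return fields[3], nil
    }

    func main() {
        free, err := freeGB("/var")
        if err != nil {
            fmt.Println("df check failed:", err)
            return
        }
        fmt.Println("available on /var:", free)
    }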
	I1229 07:39:23.600439  870002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-199444
	I1229 07:39:23.618998  870002 ssh_runner.go:195] Run: cat /version.json
	I1229 07:39:23.619057  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.619344  870002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:39:23.619406  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.654216  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.662523  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.868635  870002 ssh_runner.go:195] Run: systemctl --version
	I1229 07:39:23.875281  870002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:39:23.934339  870002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:39:23.939097  870002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:39:23.939185  870002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:23.948507  870002 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:39:23.948533  870002 start.go:496] detecting cgroup driver to use...
	I1229 07:39:23.948566  870002 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:39:23.948613  870002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:39:23.964582  870002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:39:23.978282  870002 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:39:23.978343  870002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:39:23.993997  870002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:39:24.009620  870002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:39:24.150793  870002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:39:24.275294  870002 docker.go:234] disabling docker service ...
	I1229 07:39:24.275403  870002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:39:24.290576  870002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:39:24.303547  870002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:39:24.440012  870002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:39:24.576980  870002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:39:24.590702  870002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:39:24.604295  870002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:39:24.604392  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.613246  870002 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:39:24.613323  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.622697  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.631653  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.640363  870002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:39:24.648659  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.657483  870002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.665864  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.674790  870002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:39:24.682090  870002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:39:24.689650  870002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:24.814690  870002 ssh_runner.go:195] Run: sudo systemctl restart crio
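The sed series above, driven over SSH, rewrites /etc/crio/crio.conf.d/02-crio.conf so cri-o uses registry.k8s.io/pause:3.10.1 as the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0, before crio is restarted. A rough Go sketch of the first two substitutions (path and patterns taken from the log; this is not the actual minikube code, which shells out to sed):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // rewriteCrioConf applies the two substitutions shown in the log:
    // force the pause image and the cgroup manager in cri-o's drop-in config.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            log.Fatal(err)
        }
    }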
	I1229 07:39:21.527339  870553 delete.go:124] DEMOLISHING missing-upgrade-865040 ...
	I1229 07:39:21.527473  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.542678  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	W1229 07:39:21.542742  870553 stop.go:83] unable to get state: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.542765  870553 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.543231  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.559157  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:21.559225  870553 delete.go:82] Unable to get host status for missing-upgrade-865040, assuming it has already been deleted: state: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.559295  870553 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-865040
	W1229 07:39:21.573661  870553 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-865040 returned with exit code 1
	I1229 07:39:21.573718  870553 kic.go:371] could not find the container missing-upgrade-865040 to remove it. will try anyways
	I1229 07:39:21.573779  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.588521  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	W1229 07:39:21.588587  870553 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.588662  870553 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-865040 /bin/bash -c "sudo init 0"
	W1229 07:39:21.604078  870553 cli_runner.go:211] docker exec --privileged -t missing-upgrade-865040 /bin/bash -c "sudo init 0" returned with exit code 1
	I1229 07:39:21.604113  870553 oci.go:659] error shutdown missing-upgrade-865040: docker exec --privileged -t missing-upgrade-865040 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:22.604285  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:22.621379  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:22.621444  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:22.621458  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:22.621500  870553 retry.go:84] will retry after 700ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:23.286007  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:23.305978  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:23.306038  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:23.306048  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:23.705979  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:23.725443  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:23.725529  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:23.725545  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:25.182214  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:25.199370  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:25.199434  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:25.199449  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:26.162961  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:26.179222  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:26.179284  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:26.179309  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:29.842308  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:29.859429  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:29.859514  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:29.859524  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:29.859564  870553 retry.go:84] will retry after 3.7s: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:33.338732  870002 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.524006273s)
	I1229 07:39:33.338758  870002 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:39:33.338808  870002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:39:33.342695  870002 start.go:574] Will wait 60s for crictl version
	I1229 07:39:33.342767  870002 ssh_runner.go:195] Run: which crictl
	I1229 07:39:33.346186  870002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:39:33.374244  870002 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:39:33.374336  870002 ssh_runner.go:195] Run: crio --version
	I1229 07:39:33.401553  870002 ssh_runner.go:195] Run: crio --version
	I1229 07:39:33.434965  870002 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:39:33.437982  870002 cli_runner.go:164] Run: docker network inspect pause-199444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:33.454103  870002 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:39:33.458108  870002 kubeadm.go:884] updating cluster {Name:pause-199444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-199444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:39:33.458243  870002 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:39:33.458332  870002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:33.493350  870002 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:33.493373  870002 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:39:33.493424  870002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:33.520839  870002 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:33.520867  870002 cache_images.go:86] Images are preloaded, skipping loading
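The `sudo crictl images --output json` probe is how the preload check decides that all required images are already in cri-o's store. A small Go sketch that runs the same command and counts the entries, assuming passwordless sudo and crictl on PATH, and decoding only the top-level images array rather than the full schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Assumes passwordless sudo, as on the test host.
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Decode only what we need: the list of images under the "images" key.
        var resp struct {
            Images []json.RawMessage `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("cri-o reports %d images\n", len(resp.Images))
    }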
	I1229 07:39:33.520875  870002 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1229 07:39:33.520969  870002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-199444 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-199444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
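The kubelet unit above is rendered from the node config: the node name, node IP and Kubernetes version are substituted into the ExecStart line. A minimal text/template sketch of that substitution (struct and template names here are illustrative, and some flags from the real ExecStart are trimmed):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // node holds the three values that vary in the ExecStart line shown above.
    type node struct {
        Name    string
        IP      string
        KubeVer string
    }

    const execStart = `ExecStart=/var/lib/minikube/binaries/{{.KubeVer}}/kubelet ` +
        `--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
        `--config=/var/lib/kubelet/config.yaml ` +
        `--hostname-override={{.Name}} ` +
        `--kubeconfig=/etc/kubernetes/kubelet.conf ` +
        `--node-ip={{.IP}}` + "\n"

    func main() {
        t := template.Must(template.New("execstart").Parse(execStart))
        n := node{Name: "pause-199444", IP: "192.168.76.2", KubeVer: "v1.35.0"}
        if err := t.Execute(os.Stdout, n); err != nil {
            log.Fatal(err)
        }
    }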
	I1229 07:39:33.521052  870002 ssh_runner.go:195] Run: crio config
	I1229 07:39:33.600745  870002 cni.go:84] Creating CNI manager for ""
	I1229 07:39:33.600768  870002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:39:33.600831  870002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:39:33.600864  870002 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-199444 NodeName:pause-199444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:39:33.601040  870002 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-199444"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:39:33.601139  870002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:39:33.610505  870002 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:39:33.610600  870002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:39:33.618485  870002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1229 07:39:33.632019  870002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:39:33.645279  870002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
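The 2229-byte kubeadm.yaml.new copied here is the four-document YAML stream rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check of such a stream, sketched with gopkg.in/yaml.v3 (an assumed dependency, not what minikube itself uses; the file path is illustrative):

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml.new") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for n := 1; ; n++ {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatalf("document %d: %v", n, err)
            }
            fmt.Printf("document %d: kind=%v apiVersion=%v\n", n, doc["kind"], doc["apiVersion"])
        }
    }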
	I1229 07:39:33.659366  870002 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:39:33.663359  870002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:33.792849  870002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:33.806203  870002 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444 for IP: 192.168.76.2
	I1229 07:39:33.806268  870002 certs.go:195] generating shared ca certs ...
	I1229 07:39:33.806299  870002 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:33.806465  870002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:39:33.806544  870002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:39:33.806576  870002 certs.go:257] generating profile certs ...
	I1229 07:39:33.806692  870002 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.key
	I1229 07:39:33.806789  870002 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/apiserver.key.e4e09964
	I1229 07:39:33.806853  870002 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/proxy-client.key
	I1229 07:39:33.807002  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:39:33.807064  870002 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:39:33.807098  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:39:33.807148  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:39:33.807207  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:39:33.807256  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:39:33.807333  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:33.808222  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:39:33.832887  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:39:33.851200  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:39:33.870542  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:39:33.889475  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1229 07:39:33.908027  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:39:33.926155  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:39:33.944368  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:39:33.962886  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:39:33.981645  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:39:33.999208  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:39:34.020533  870002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:39:34.036405  870002 ssh_runner.go:195] Run: openssl version
	I1229 07:39:34.045032  870002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.053973  870002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:39:34.063212  870002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.067446  870002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.067616  870002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.109684  870002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:34.117446  870002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.125346  870002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:39:34.133014  870002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.137250  870002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.137362  870002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.180066  870002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:39:34.187778  870002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.195001  870002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:39:34.202575  870002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.206455  870002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.206531  870002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.247913  870002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
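Each certificate dropped into /usr/share/ca-certificates is exposed to OpenSSL-based clients through a symlink in /etc/ssl/certs named after its subject hash; the `openssl x509 -hash -noout` runs above compute that hash (for example b5213941 for minikubeCA.pem) and the `test -L` calls confirm the <hash>.0 link exists. A sketch of computing the hash and creating the link, assuming openssl is installed and the process may write to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert links certPath into /etc/ssl/certs under its OpenSSL subject
    // hash, mirroring `openssl x509 -hash -noout -in ...` plus `ln -fs ...`.
    func installCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("installed")
    }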
	I1229 07:39:34.255552  870002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:39:34.259430  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:39:34.300832  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:39:34.341959  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:39:34.383019  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:39:34.426262  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:39:34.467728  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
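Each `openssl x509 -noout -checkend 86400` above asks whether a control-plane certificate remains valid for at least the next 24 hours (exit status 0 means it does). The equivalent check in pure Go with crypto/x509, assuming a PEM-encoded certificate file:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }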
	I1229 07:39:34.511168  870002 kubeadm.go:401] StartCluster: {Name:pause-199444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-199444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:34.511292  870002 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:39:34.511402  870002 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:34.541481  870002 cri.go:96] found id: "de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0"
	I1229 07:39:34.541556  870002 cri.go:96] found id: "1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7"
	I1229 07:39:34.541579  870002 cri.go:96] found id: "c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866"
	I1229 07:39:34.541592  870002 cri.go:96] found id: "52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67"
	I1229 07:39:34.541597  870002 cri.go:96] found id: "4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a"
	I1229 07:39:34.541601  870002 cri.go:96] found id: "f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b"
	I1229 07:39:34.541610  870002 cri.go:96] found id: "93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9"
	I1229 07:39:34.541639  870002 cri.go:96] found id: ""
	I1229 07:39:34.541705  870002 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:39:34.561845  870002 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:39:34Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:39:34.561976  870002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:39:34.569864  870002 kubeadm.go:417] found existing configuration files, will attempt cluster restart
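The `sudo ls` over kubeadm-flags.env, config.yaml and the etcd data directory is the heuristic for "a cluster already ran here, so restart it rather than run kubeadm init again". A sketch of that existence check (paths copied from the log; run as root, since those directories are normally root-only):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        existing := true
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                existing = false
                fmt.Println("missing:", p)
            }
        }
        if existing {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no prior cluster state, a fresh kubeadm init would be needed")
        }
    }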
	I1229 07:39:34.569883  870002 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:39:34.569935  870002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:39:34.577400  870002 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:39:34.578097  870002 kubeconfig.go:125] found "pause-199444" server: "https://192.168.76.2:8443"
	I1229 07:39:34.578980  870002 kapi.go:59] client config for pause-199444: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.key", CAFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:39:34.579500  870002 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:39:34.579520  870002 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:39:34.579525  870002 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:39:34.579530  870002 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:39:34.579537  870002 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:39:34.579541  870002 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:39:34.579903  870002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:39:34.589026  870002 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1229 07:39:34.589103  870002 kubeadm.go:602] duration metric: took 19.213977ms to restartPrimaryControlPlane
	I1229 07:39:34.589120  870002 kubeadm.go:403] duration metric: took 77.962519ms to StartCluster
	I1229 07:39:34.589135  870002 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:34.589225  870002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:39:34.590156  870002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:34.590397  870002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:39:34.590689  870002 config.go:182] Loaded profile config "pause-199444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:39:34.590771  870002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:39:34.596848  870002 out.go:179] * Enabled addons: 
	I1229 07:39:34.596848  870002 out.go:179] * Verifying Kubernetes components...
	I1229 07:39:34.599593  870002 addons.go:530] duration metric: took 8.815492ms for enable addons: enabled=[]
	I1229 07:39:34.599681  870002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:34.736979  870002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:34.750495  870002 node_ready.go:35] waiting up to 6m0s for node "pause-199444" to be "Ready" ...
	I1229 07:39:33.524939  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:33.542966  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:33.543026  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:33.543047  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:33.543080  870553 retry.go:84] will retry after 7.9s: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:38.309414  870002 node_ready.go:49] node "pause-199444" is "Ready"
	I1229 07:39:38.309449  870002 node_ready.go:38] duration metric: took 3.55890855s for node "pause-199444" to be "Ready" ...
	I1229 07:39:38.309463  870002 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:39:38.309521  870002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:39:38.322568  870002 api_server.go:72] duration metric: took 3.732133271s to wait for apiserver process to appear ...
	I1229 07:39:38.322596  870002 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:39:38.322615  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:38.378890  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:39:38.378919  870002 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:39:38.823496  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:38.831828  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:39:38.831869  870002 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:39:39.323441  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:39.334382  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:39:39.334411  870002 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:39:39.822736  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:39.830904  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1229 07:39:39.832249  870002 api_server.go:141] control plane version: v1.35.0
	I1229 07:39:39.832277  870002 api_server.go:131] duration metric: took 1.509674913s to wait for apiserver health ...
	I1229 07:39:39.832286  870002 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:39:39.836215  870002 system_pods.go:59] 7 kube-system pods found
	I1229 07:39:39.836253  870002 system_pods.go:61] "coredns-7d764666f9-fkkph" [ff98f4b2-06a7-4b08-b45e-ddaf75cc6ebd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:39.836261  870002 system_pods.go:61] "etcd-pause-199444" [6769c02a-a9e3-43c9-8075-1ed16e0331a3] Running
	I1229 07:39:39.836268  870002 system_pods.go:61] "kindnet-7kqm7" [b26d894d-b2f4-43a4-ab75-91e784b0888a] Running
	I1229 07:39:39.836274  870002 system_pods.go:61] "kube-apiserver-pause-199444" [4da3ca57-8553-4f05-8221-d1af098bdecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:39:39.836283  870002 system_pods.go:61] "kube-controller-manager-pause-199444" [b019d0dc-3c54-42ec-bceb-260028d175ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:39:39.836293  870002 system_pods.go:61] "kube-proxy-hjhq6" [878aafee-6668-4ad2-8c4c-dafd29e52775] Running
	I1229 07:39:39.836302  870002 system_pods.go:61] "kube-scheduler-pause-199444" [f9a4053a-62ac-46e4-980d-a84613506424] Running
	I1229 07:39:39.836310  870002 system_pods.go:74] duration metric: took 4.016021ms to wait for pod list to return data ...
	I1229 07:39:39.836322  870002 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:39:39.839670  870002 default_sa.go:45] found service account: "default"
	I1229 07:39:39.839699  870002 default_sa.go:55] duration metric: took 3.370746ms for default service account to be created ...
	I1229 07:39:39.839711  870002 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:39:39.842634  870002 system_pods.go:86] 7 kube-system pods found
	I1229 07:39:39.842670  870002 system_pods.go:89] "coredns-7d764666f9-fkkph" [ff98f4b2-06a7-4b08-b45e-ddaf75cc6ebd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:39.842679  870002 system_pods.go:89] "etcd-pause-199444" [6769c02a-a9e3-43c9-8075-1ed16e0331a3] Running
	I1229 07:39:39.842686  870002 system_pods.go:89] "kindnet-7kqm7" [b26d894d-b2f4-43a4-ab75-91e784b0888a] Running
	I1229 07:39:39.842692  870002 system_pods.go:89] "kube-apiserver-pause-199444" [4da3ca57-8553-4f05-8221-d1af098bdecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:39:39.842700  870002 system_pods.go:89] "kube-controller-manager-pause-199444" [b019d0dc-3c54-42ec-bceb-260028d175ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:39:39.842706  870002 system_pods.go:89] "kube-proxy-hjhq6" [878aafee-6668-4ad2-8c4c-dafd29e52775] Running
	I1229 07:39:39.842711  870002 system_pods.go:89] "kube-scheduler-pause-199444" [f9a4053a-62ac-46e4-980d-a84613506424] Running
	I1229 07:39:39.842717  870002 system_pods.go:126] duration metric: took 3.001437ms to wait for k8s-apps to be running ...
	I1229 07:39:39.842730  870002 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:39:39.842790  870002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:39.856700  870002 system_svc.go:56] duration metric: took 13.960576ms WaitForService to wait for kubelet
	I1229 07:39:39.856771  870002 kubeadm.go:587] duration metric: took 5.26634076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:39:39.856837  870002 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:39:39.859451  870002 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:39:39.859487  870002 node_conditions.go:123] node cpu capacity is 2
	I1229 07:39:39.859501  870002 node_conditions.go:105] duration metric: took 2.645642ms to run NodePressure ...
	I1229 07:39:39.859524  870002 start.go:242] waiting for startup goroutines ...
	I1229 07:39:39.859536  870002 start.go:247] waiting for cluster config update ...
	I1229 07:39:39.859545  870002 start.go:256] writing updated cluster config ...
	I1229 07:39:39.859890  870002 ssh_runner.go:195] Run: rm -f paused
	I1229 07:39:39.863912  870002 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:39:39.864627  870002 kapi.go:59] client config for pause-199444: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.key", CAFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:39:39.867261  870002 pod_ready.go:83] waiting for pod "coredns-7d764666f9-fkkph" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:41.492320  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:41.514526  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:41.514600  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:41.514610  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:41.514644  870553 oci.go:88] couldn't shut down missing-upgrade-865040 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	 
	I1229 07:39:41.514706  870553 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-865040
	I1229 07:39:41.534406  870553 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-865040
	W1229 07:39:41.554747  870553 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-865040 returned with exit code 1
	I1229 07:39:41.554836  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:41.572095  870553 cli_runner.go:164] Run: docker network rm missing-upgrade-865040
	I1229 07:39:41.674344  870553 fix.go:124] Sleeping 1 second for extra luck!
	I1229 07:39:42.674502  870553 start.go:125] createHost starting for "" (driver="docker")
	W1229 07:39:41.872342  870002 pod_ready.go:104] pod "coredns-7d764666f9-fkkph" is not "Ready", error: <nil>
	W1229 07:39:43.872936  870002 pod_ready.go:104] pod "coredns-7d764666f9-fkkph" is not "Ready", error: <nil>
	W1229 07:39:45.874733  870002 pod_ready.go:104] pod "coredns-7d764666f9-fkkph" is not "Ready", error: <nil>
	I1229 07:39:42.677787  870553 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:39:42.677933  870553 start.go:159] libmachine.API.Create for "missing-upgrade-865040" (driver="docker")
	I1229 07:39:42.677967  870553 client.go:173] LocalClient.Create starting
	I1229 07:39:42.678055  870553 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:39:42.678094  870553 main.go:144] libmachine: Decoding PEM data...
	I1229 07:39:42.678119  870553 main.go:144] libmachine: Parsing certificate...
	I1229 07:39:42.678177  870553 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:39:42.678201  870553 main.go:144] libmachine: Decoding PEM data...
	I1229 07:39:42.678215  870553 main.go:144] libmachine: Parsing certificate...
	I1229 07:39:42.678472  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:39:42.695792  870553 cli_runner.go:211] docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:39:42.695889  870553 network_create.go:284] running [docker network inspect missing-upgrade-865040] to gather additional debugging logs...
	I1229 07:39:42.695913  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040
	W1229 07:39:42.711230  870553 cli_runner.go:211] docker network inspect missing-upgrade-865040 returned with exit code 1
	I1229 07:39:42.711273  870553 network_create.go:287] error running [docker network inspect missing-upgrade-865040]: docker network inspect missing-upgrade-865040: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-865040 not found
	I1229 07:39:42.711291  870553 network_create.go:289] output of [docker network inspect missing-upgrade-865040]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-865040 not found
	
	** /stderr **
	I1229 07:39:42.711412  870553 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:42.728441  870553 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:39:42.728842  870553 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:39:42.729151  870553 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:39:42.729475  870553 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7c9898f8f370 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:36:2f:ca:8d:20} reservation:<nil>}
	I1229 07:39:42.729914  870553 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c27490}
	I1229 07:39:42.729935  870553 network_create.go:124] attempt to create docker network missing-upgrade-865040 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:39:42.729999  870553 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-865040 missing-upgrade-865040
	I1229 07:39:42.795401  870553 network_create.go:108] docker network missing-upgrade-865040 192.168.85.0/24 created
	I1229 07:39:42.795439  870553 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-865040" container
	I1229 07:39:42.795514  870553 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:39:42.813649  870553 cli_runner.go:164] Run: docker volume create missing-upgrade-865040 --label name.minikube.sigs.k8s.io=missing-upgrade-865040 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:39:42.830639  870553 oci.go:103] Successfully created a docker volume missing-upgrade-865040
	I1229 07:39:42.830742  870553 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-865040-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-865040 --entrypoint /usr/bin/test -v missing-upgrade-865040:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1229 07:39:43.276301  870553 oci.go:107] Successfully prepared a docker volume missing-upgrade-865040
	I1229 07:39:43.276372  870553 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1229 07:39:43.276385  870553 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:39:43.276450  870553 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-865040:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:39:47.386801  870002 pod_ready.go:94] pod "coredns-7d764666f9-fkkph" is "Ready"
	I1229 07:39:47.386837  870002 pod_ready.go:86] duration metric: took 7.519552768s for pod "coredns-7d764666f9-fkkph" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:47.403051  870002 pod_ready.go:83] waiting for pod "etcd-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:48.411482  870002 pod_ready.go:94] pod "etcd-pause-199444" is "Ready"
	I1229 07:39:48.411518  870002 pod_ready.go:86] duration metric: took 1.008436645s for pod "etcd-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:48.414906  870002 pod_ready.go:83] waiting for pod "kube-apiserver-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:39:50.422260  870002 pod_ready.go:104] pod "kube-apiserver-pause-199444" is not "Ready", error: <nil>
	I1229 07:39:50.922407  870002 pod_ready.go:94] pod "kube-apiserver-pause-199444" is "Ready"
	I1229 07:39:50.922439  870002 pod_ready.go:86] duration metric: took 2.507503678s for pod "kube-apiserver-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.925331  870002 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.931600  870002 pod_ready.go:94] pod "kube-controller-manager-pause-199444" is "Ready"
	I1229 07:39:50.931635  870002 pod_ready.go:86] duration metric: took 6.272161ms for pod "kube-controller-manager-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.937323  870002 pod_ready.go:83] waiting for pod "kube-proxy-hjhq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.970585  870002 pod_ready.go:94] pod "kube-proxy-hjhq6" is "Ready"
	I1229 07:39:50.970611  870002 pod_ready.go:86] duration metric: took 33.261962ms for pod "kube-proxy-hjhq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:47.729359  870553 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-865040:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.452862641s)
	I1229 07:39:47.729391  870553 kic.go:203] duration metric: took 4.45300304s to extract preloaded images to volume ...
	W1229 07:39:47.729530  870553 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:39:47.729643  870553 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:39:47.790516  870553 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-865040 --name missing-upgrade-865040 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-865040 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-865040 --network missing-upgrade-865040 --ip 192.168.85.2 --volume missing-upgrade-865040:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1229 07:39:48.134943  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Running}}
	I1229 07:39:48.157017  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	I1229 07:39:48.182002  870553 cli_runner.go:164] Run: docker exec missing-upgrade-865040 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:39:48.233499  870553 oci.go:144] the created container "missing-upgrade-865040" has a running status.
	I1229 07:39:48.233528  870553 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa...
	I1229 07:39:49.474621  870553 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:39:49.497506  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	I1229 07:39:49.519750  870553 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:39:49.519773  870553 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-865040 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:39:49.569306  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	I1229 07:39:49.589615  870553 machine.go:94] provisionDockerMachine start ...
	I1229 07:39:49.589706  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:49.608025  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:49.608471  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:49.608515  870553 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:39:49.734552  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:49.734578  870553 ubuntu.go:182] provisioning hostname "missing-upgrade-865040"
	I1229 07:39:49.734647  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:49.752636  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:49.753094  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:49.753115  870553 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-865040 && echo "missing-upgrade-865040" | sudo tee /etc/hostname
	I1229 07:39:49.892972  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:49.893053  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:49.911117  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:49.911426  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:49.911450  870553 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-865040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-865040/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-865040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:39:50.037479  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:39:50.037506  870553 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:39:50.037535  870553 ubuntu.go:190] setting up certificates
	I1229 07:39:50.037545  870553 provision.go:84] configureAuth start
	I1229 07:39:50.037607  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:50.057401  870553 provision.go:143] copyHostCerts
	I1229 07:39:50.057486  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:39:50.057501  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:39:50.057584  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:39:50.057693  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:39:50.057704  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:39:50.057732  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:39:50.057804  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:39:50.057814  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:39:50.057840  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:39:50.057903  870553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-865040 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-865040]
	I1229 07:39:50.214609  870553 provision.go:177] copyRemoteCerts
	I1229 07:39:50.214677  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:39:50.214720  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.231830  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:50.322475  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:39:50.349051  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:39:50.373943  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:39:50.398269  870553 provision.go:87] duration metric: took 360.700468ms to configureAuth
	I1229 07:39:50.398295  870553 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:39:50.398520  870553 config.go:182] Loaded profile config "missing-upgrade-865040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:39:50.398626  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.421222  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:50.421834  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:50.421857  870553 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:39:50.707488  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:39:50.707512  870553 machine.go:97] duration metric: took 1.117874625s to provisionDockerMachine
	I1229 07:39:50.707523  870553 client.go:176] duration metric: took 8.029543769s to LocalClient.Create
	I1229 07:39:50.707563  870553 start.go:167] duration metric: took 8.029630514s to libmachine.API.Create "missing-upgrade-865040"
	I1229 07:39:50.707577  870553 start.go:293] postStartSetup for "missing-upgrade-865040" (driver="docker")
	I1229 07:39:50.707588  870553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:50.707674  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:50.707734  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.727659  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:50.823092  870553 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:50.826788  870553 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:50.826860  870553 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1229 07:39:50.826878  870553 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1229 07:39:50.826892  870553 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1229 07:39:50.826903  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:39:50.826986  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:39:50.827125  870553 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:39:50.827278  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:50.838326  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:50.867199  870553 start.go:296] duration metric: took 159.605997ms for postStartSetup
	I1229 07:39:50.867646  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:50.885216  870553 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/config.json ...
	I1229 07:39:50.885497  870553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:50.885550  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.903175  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:50.993615  870553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:50.998177  870553 start.go:128] duration metric: took 8.323636628s to createHost
	I1229 07:39:50.998268  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:51.019223  870553 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:39:51.019256  870553 machine.go:94] provisionDockerMachine start ...
	I1229 07:39:51.019338  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.036306  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:51.036638  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:51.036654  870553 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:39:51.164308  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:51.164377  870553 ubuntu.go:182] provisioning hostname "missing-upgrade-865040"
	I1229 07:39:51.164478  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.183850  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:51.184151  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:51.184162  870553 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-865040 && echo "missing-upgrade-865040" | sudo tee /etc/hostname
	I1229 07:39:51.325143  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:51.325253  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.343812  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:51.344128  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:51.344152  870553 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-865040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-865040/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-865040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:39:51.468937  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:39:51.468963  870553 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:39:51.468983  870553 ubuntu.go:190] setting up certificates
	I1229 07:39:51.468994  870553 provision.go:84] configureAuth start
	I1229 07:39:51.469069  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:51.486534  870553 provision.go:143] copyHostCerts
	I1229 07:39:51.486612  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:39:51.486627  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:39:51.486693  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:39:51.486790  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:39:51.486800  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:39:51.486835  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:39:51.486936  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:39:51.486957  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:39:51.486987  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:39:51.487052  870553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-865040 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-865040]
	I1229 07:39:51.920985  870553 provision.go:177] copyRemoteCerts
	I1229 07:39:51.921055  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:39:51.921111  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.943850  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.034958  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:39:52.060427  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:39:52.086622  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:39:52.113285  870553 provision.go:87] duration metric: took 644.270214ms to configureAuth
	I1229 07:39:52.113315  870553 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:39:52.113532  870553 config.go:182] Loaded profile config "missing-upgrade-865040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:39:52.113639  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.130856  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:52.131168  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:52.131189  870553 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:39:52.396162  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:39:52.396187  870553 machine.go:97] duration metric: took 1.376923117s to provisionDockerMachine
	I1229 07:39:52.396199  870553 start.go:293] postStartSetup for "missing-upgrade-865040" (driver="docker")
	I1229 07:39:52.396210  870553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:52.396335  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:52.396395  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.414311  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.506252  870553 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:52.509648  870553 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:52.509687  870553 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1229 07:39:52.509697  870553 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1229 07:39:52.509705  870553 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1229 07:39:52.509716  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:39:52.509775  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:39:52.509856  870553 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:39:52.509966  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:52.518869  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:52.544295  870553 start.go:296] duration metric: took 148.079246ms for postStartSetup
	I1229 07:39:52.544393  870553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:52.544443  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.561755  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.649416  870553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:52.654034  870553 fix.go:56] duration metric: took 31.150685323s for fixHost
	I1229 07:39:52.654061  870553 start.go:83] releasing machines lock for "missing-upgrade-865040", held for 31.150738936s
	I1229 07:39:52.654132  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:52.671593  870553 ssh_runner.go:195] Run: cat /version.json
	I1229 07:39:52.671633  870553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:39:52.671643  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.671699  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.700311  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.701614  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	W1229 07:39:52.916025  870553 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1229 07:39:52.916117  870553 ssh_runner.go:195] Run: systemctl --version
	I1229 07:39:52.922023  870553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:39:53.080632  870553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 07:39:53.086398  870553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:53.113763  870553 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1229 07:39:53.113839  870553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:53.151035  870553 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1229 07:39:53.151064  870553 start.go:496] detecting cgroup driver to use...
	I1229 07:39:53.151120  870553 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:39:53.151192  870553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:39:53.178874  870553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:39:53.191638  870553 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:39:53.191730  870553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:39:53.206743  870553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:39:53.224026  870553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:39:53.327912  870553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:39:53.437779  870553 docker.go:234] disabling docker service ...
	I1229 07:39:53.437848  870553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:39:53.459800  870553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:39:53.473060  870553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:39:53.566352  870553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:39:53.669501  870553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:39:53.682277  870553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:39:53.699964  870553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1229 07:39:53.700075  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.710747  870553 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:39:53.710827  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.721224  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.732678  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.743110  870553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:39:53.752382  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.762401  870553 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.778546  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.788715  870553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:39:53.797280  870553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:39:53.806111  870553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:53.907472  870553 ssh_runner.go:195] Run: sudo systemctl restart crio
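The sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all target /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. As a rough sketch of the resulting drop-in, assuming the standard CRI-O section layout (the surrounding file contents are not shown in this log), the relevant keys would read:

	# /etc/crio/crio.conf.d/02-crio.conf -- sketch only; section headers assumed, not shown in the log
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]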
	I1229 07:39:54.039351  870553 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:39:54.039468  870553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:39:54.043420  870553 start.go:574] Will wait 60s for crictl version
	I1229 07:39:54.043519  870553 ssh_runner.go:195] Run: which crictl
	I1229 07:39:54.047252  870553 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:39:54.090168  870553 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1229 07:39:54.090293  870553 ssh_runner.go:195] Run: crio --version
	I1229 07:39:54.130049  870553 ssh_runner.go:195] Run: crio --version
	I1229 07:39:54.176684  870553 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1229 07:39:51.171633  870002 pod_ready.go:83] waiting for pod "kube-scheduler-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:39:53.178122  870002 pod_ready.go:104] pod "kube-scheduler-pause-199444" is not "Ready", error: <nil>
	I1229 07:39:55.178119  870002 pod_ready.go:94] pod "kube-scheduler-pause-199444" is "Ready"
	I1229 07:39:55.178142  870002 pod_ready.go:86] duration metric: took 4.00647985s for pod "kube-scheduler-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:55.178168  870002 pod_ready.go:40] duration metric: took 15.314207191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:39:55.262731  870002 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:39:55.266556  870002 out.go:203] 
	W1229 07:39:55.269901  870002 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:39:55.273732  870002 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:39:55.276737  870002 out.go:179] * Done! kubectl is now configured to use "pause-199444" cluster and "default" namespace by default
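The pod_ready wait that just finished covers the labels listed at 07:39:39.863912 (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A roughly equivalent manual check, assuming the kubeconfig context carries the profile name and repeating the command once per label, would be:

	kubectl --context pause-199444 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s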
	I1229 07:39:54.179480  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:54.196258  870553 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:39:54.199969  870553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:39:54.210977  870553 kubeadm.go:884] updating cluster {Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:39:54.211096  870553 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1229 07:39:54.211155  870553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:54.306544  870553 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:54.306570  870553 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:39:54.306626  870553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:54.381590  870553 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:54.381615  870553 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:39:54.381623  870553 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1229 07:39:54.381719  870553 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-865040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
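The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that minikube installs a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 372-byte scp). Once installed, the merged unit can be reviewed on the node with systemd's own tooling; an optional manual step, not part of the run:

	sudo systemctl cat kubelet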
	I1229 07:39:54.381814  870553 ssh_runner.go:195] Run: crio config
	I1229 07:39:54.447500  870553 cni.go:84] Creating CNI manager for ""
	I1229 07:39:54.447521  870553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:39:54.447537  870553 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:39:54.447562  870553 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-865040 NodeName:missing-upgrade-865040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:39:54.447693  870553 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-865040"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:39:54.447766  870553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1229 07:39:54.458373  870553 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:39:54.458480  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:39:54.468686  870553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1229 07:39:54.488565  870553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:39:54.508914  870553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
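
The file written above is the multi-document kubeadm.yaml shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming gopkg.in/yaml.v3 is available on the machine running it, of how one could enumerate the documents and their kinds as a quick sanity check; the path comes from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; adjust for a different profile.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// The generated config is a multi-document YAML stream separated by "---".
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		fmt.Printf("%v %v\n", m["apiVersion"], m["kind"])
	}
}
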
	I1229 07:39:54.529545  870553 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:39:54.533553  870553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
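
That bash one-liner rewrites /etc/hosts so the control-plane alias resolves to the node IP: it filters out any stale control-plane.minikube.internal line, appends the fresh mapping, and copies the result back with sudo. A minimal Go sketch of the same rewrite (hostname and IP are taken from the log; the temp-file/sudo indirection is omitted to keep it short):

package main

import (
	"os"
	"strings"
)

func main() {
	// Mapping taken from the log above.
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// The log writes to /tmp/h.$$ and copies it back with sudo; a direct
	// write is used here only for brevity.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
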
	I1229 07:39:54.545660  870553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:54.640517  870553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:54.654809  870553 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040 for IP: 192.168.85.2
	I1229 07:39:54.654834  870553 certs.go:195] generating shared ca certs ...
	I1229 07:39:54.654851  870553 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:54.654996  870553 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:39:54.655046  870553 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:39:54.655057  870553 certs.go:257] generating profile certs ...
	I1229 07:39:54.655142  870553 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.key
	I1229 07:39:54.655199  870553 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/apiserver.key.2348e427
	I1229 07:39:54.655241  870553 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/proxy-client.key
	I1229 07:39:54.655357  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:39:54.655396  870553 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:39:54.655409  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:39:54.655435  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:39:54.655462  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:39:54.655496  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:39:54.655543  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:54.656140  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:39:54.693169  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:39:54.719600  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:39:54.747771  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:39:54.772318  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1229 07:39:54.796972  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:39:54.825770  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:39:54.854205  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:39:54.882164  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:39:54.909575  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:39:54.953719  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:39:54.982344  870553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:39:55.001441  870553 ssh_runner.go:195] Run: openssl version
	I1229 07:39:55.012749  870553 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.023130  870553 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:39:55.033239  870553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.037124  870553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.037193  870553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.044883  870553 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:39:55.053719  870553 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:39:55.062369  870553 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.071009  870553 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:39:55.080028  870553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.084035  870553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.084104  870553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.091418  870553 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:55.100109  870553 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:55.108891  870553 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.117522  870553 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:39:55.125840  870553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.129154  870553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.129214  870553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.136288  870553 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:39:55.145045  870553 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:39:55.153449  870553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:39:55.157029  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:39:55.164224  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:39:55.172593  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:39:55.180375  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:39:55.188626  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:39:55.196380  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
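
The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate remains valid for at least another 24 hours. A minimal sketch of the same check in Go with crypto/x509; the helper name checkend is illustrative and the hard-coded path is one of the certs from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at pemPath is still valid `window`
// from now — the question `openssl x509 -checkend 86400` answers.
func checkend(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for the next 24h:", ok)
}
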
	I1229 07:39:55.204421  870553 kubeadm.go:401] StartCluster: {Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:55.204504  870553 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:39:55.204563  870553 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:55.251514  870553 cri.go:96] found id: ""
	I1229 07:39:55.251586  870553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:39:55.262783  870553 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:39:55.262805  870553 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:39:55.262859  870553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:39:55.272205  870553 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:39:55.272910  870553 kubeconfig.go:125] found "missing-upgrade-865040" server: "https://192.168.85.2:8443"
	I1229 07:39:55.273932  870553 kapi.go:59] client config for missing-upgrade-865040: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.key", CAFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
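
The rest.Config dump above is the client configuration minikube builds to talk to the existing apiserver during the restart. A minimal, hedged sketch of constructing an equivalent client with client-go (host and certificate paths are copied from the log; listing nodes is only a plausible smoke test, not what minikube does at this step):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and cert/key/CA paths mirror the rest.Config printed above.
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
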
	I1229 07:39:55.274413  870553 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:39:55.274433  870553 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:39:55.274440  870553 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:39:55.274445  870553 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:39:55.274449  870553 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:39:55.274454  870553 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:39:55.276872  870553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:39:55.288513  870553 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-29 07:39:00.880280752 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-29 07:39:54.520885384 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
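
The drift check above is effectively a `diff -u` between the kubeadm.yaml already on the node and the freshly generated one; any difference (here, the removed etcd proxy-refresh-interval extra arg) triggers a reconfigure. A small sketch of the same idea using diff's exit code, with paths from the log and everything else illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("configs identical, no reconfiguration needed")
		return
	}
	// diff exits 1 when the files differ, >1 on error.
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Println("config drift detected:\n" + string(out))
		return
	}
	panic(err)
}
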
	I1229 07:39:55.288584  870553 kubeadm.go:1161] stopping kube-system containers ...
	I1229 07:39:55.288600  870553 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1229 07:39:55.288661  870553 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:55.378022  870553 cri.go:96] found id: ""
	I1229 07:39:55.378138  870553 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 07:39:55.403860  870553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:39:55.417546  870553 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:39:55.417576  870553 kubeadm.go:158] found existing configuration files:
	
	I1229 07:39:55.417630  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:39:55.427565  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:39:55.427637  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:39:55.436754  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:39:55.448374  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:39:55.448462  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:39:55.457854  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:39:55.468956  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:39:55.469031  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:39:55.479784  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:39:55.493510  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:39:55.493581  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:39:55.503642  870553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:39:55.514599  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:39:55.584754  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	
	
	==> CRI-O <==
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.333341454Z" level=info msg="Created container 15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3: kube-system/kube-proxy-hjhq6/kube-proxy" id=f6f7bc62-5e0c-43e0-902e-2c2ff713ca68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.337936706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.338627758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.339169344Z" level=info msg="Starting container: d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99" id=e31ed663-3616-4b95-ae2b-297baceec57d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.347879481Z" level=info msg="Starting container: 15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3" id=2277c8ea-4276-4d66-a5bf-2dedb2f28fa6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.349081504Z" level=info msg="Created container 03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658: kube-system/coredns-7d764666f9-fkkph/coredns" id=dc780e17-08c1-44d6-8aff-d9daf394ec43 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.349444684Z" level=info msg="Started container" PID=2405 containerID=d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99 description=kube-system/kindnet-7kqm7/kindnet-cni id=e31ed663-3616-4b95-ae2b-297baceec57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=9040c0a1cf0b65b1ce27cf7a85bf3a80b5ada57a7c2e2ed47e4afbd7b7f6ccda
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.353925555Z" level=info msg="Started container" PID=2384 containerID=15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3 description=kube-system/kube-proxy-hjhq6/kube-proxy id=2277c8ea-4276-4d66-a5bf-2dedb2f28fa6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b04efd932b4bd0a3bd6d83a0452fadcfb2117daf081dad4fa81374db796f18d
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.354266974Z" level=info msg="Starting container: 03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658" id=19e0db99-16c0-4f0c-8843-3fada19b2589 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.370346482Z" level=info msg="Started container" PID=2406 containerID=03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658 description=kube-system/coredns-7d764666f9-fkkph/coredns id=19e0db99-16c0-4f0c-8843-3fada19b2589 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a29405ea67b5e08d4be88f3e1a5fecd172741bc10625896d96d6404fcd5b027
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.374893241Z" level=info msg="Created container 3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272: kube-system/kube-scheduler-pause-199444/kube-scheduler" id=912e6b06-2be1-41dd-b88d-c483468d7a45 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.377733134Z" level=info msg="Starting container: 3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272" id=79b18b5a-bb67-4ad0-9156-c0e44a743b25 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.386268073Z" level=info msg="Started container" PID=2414 containerID=3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272 description=kube-system/kube-scheduler-pause-199444/kube-scheduler id=79b18b5a-bb67-4ad0-9156-c0e44a743b25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=749afe2ac835f477a5e4c942d7bd486d01f64dbee5af7ccc54e6dfc2d6818d1b
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.403930812Z" level=info msg="Created container e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3: kube-system/kube-controller-manager-pause-199444/kube-controller-manager" id=02348c48-39bd-444b-a73e-d53cab839b6e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.404743982Z" level=info msg="Starting container: e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3" id=93978e4e-59eb-4073-a3f5-32f90ab1addb name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.407030457Z" level=info msg="Started container" PID=2435 containerID=e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3 description=kube-system/kube-controller-manager-pause-199444/kube-controller-manager id=93978e4e-59eb-4073-a3f5-32f90ab1addb name=/runtime.v1.RuntimeService/StartContainer sandboxID=436173e1b5447055745b061f40ca19be5b4b572c8a6175c0d127357a043f97dc
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.680076224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.68018857Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.689344661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.68951798Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.700699833Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.700738684Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.700773163Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.706947993Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.707109374Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e60680b8c59f9       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     23 seconds ago       Running             kube-controller-manager   1                   436173e1b5447       kube-controller-manager-pause-199444   kube-system
	3576f4eae5c50       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     23 seconds ago       Running             kube-scheduler            1                   749afe2ac835f       kube-scheduler-pause-199444            kube-system
	d8fb8a5d7e7c9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     23 seconds ago       Running             kindnet-cni               1                   9040c0a1cf0b6       kindnet-7kqm7                          kube-system
	03d69afbac915       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     23 seconds ago       Running             coredns                   1                   6a29405ea67b5       coredns-7d764666f9-fkkph               kube-system
	14a2cd392b80e       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     23 seconds ago       Running             kube-apiserver            1                   3e165d0448270       kube-apiserver-pause-199444            kube-system
	15086f7244d65       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     23 seconds ago       Running             kube-proxy                1                   9b04efd932b4b       kube-proxy-hjhq6                       kube-system
	2bd65abc5d48c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     23 seconds ago       Running             etcd                      1                   e137deeda405a       etcd-pause-199444                      kube-system
	de742ed40b650       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     45 seconds ago       Exited              coredns                   0                   6a29405ea67b5       coredns-7d764666f9-fkkph               kube-system
	1044538f90cb7       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   57 seconds ago       Exited              kindnet-cni               0                   9040c0a1cf0b6       kindnet-7kqm7                          kube-system
	c4bead8515e4b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     About a minute ago   Exited              kube-proxy                0                   9b04efd932b4b       kube-proxy-hjhq6                       kube-system
	52c5419dcaef9       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   e137deeda405a       etcd-pause-199444                      kube-system
	4d3b2913ce4c3       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   436173e1b5447       kube-controller-manager-pause-199444   kube-system
	f726d99564778       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   3e165d0448270       kube-apiserver-pause-199444            kube-system
	93170bc14e080       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   749afe2ac835f       kube-scheduler-pause-199444            kube-system
	
	
	==> coredns [03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52667 - 26930 "HINFO IN 8759021534572539469.6846005689263914927. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030762234s
	
	
	==> coredns [de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50911 - 40462 "HINFO IN 2006846187992352177.8836940632566329301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007964266s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-199444
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-199444
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=pause-199444
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_38_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-199444
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:39:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:38:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:38:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:38:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:39:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-199444
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                8bf55811-e14d-4a6d-85ea-f081bde819fb
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-fkkph                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61s
	  kube-system                 etcd-pause-199444                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
	  kube-system                 kindnet-7kqm7                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-pause-199444             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-pause-199444    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-hjhq6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-pause-199444             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  63s   node-controller  Node pause-199444 event: Registered Node pause-199444 in Controller
	  Normal  RegisteredNode  18s   node-controller  Node pause-199444 event: Registered Node pause-199444 in Controller
	
	
	==> dmesg <==
	[Dec29 07:18] overlayfs: idmapped layers are currently not supported
	[Dec29 07:19] overlayfs: idmapped layers are currently not supported
	[Dec29 07:20] overlayfs: idmapped layers are currently not supported
	[Dec29 07:21] overlayfs: idmapped layers are currently not supported
	[  +3.417224] overlayfs: idmapped layers are currently not supported
	[Dec29 07:22] overlayfs: idmapped layers are currently not supported
	[ +38.497546] overlayfs: idmapped layers are currently not supported
	[Dec29 07:23] overlayfs: idmapped layers are currently not supported
	[  +3.471993] overlayfs: idmapped layers are currently not supported
	[Dec29 07:24] overlayfs: idmapped layers are currently not supported
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2bd65abc5d48c1e3a0fe601d1b60601ced20d3e6a4564fa69c349bf8a0a1e1eb] <==
	{"level":"info","ts":"2025-12-29T07:39:35.334609Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:39:35.334670Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:39:35.336280Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:39:35.336527Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:39:35.336548Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:39:35.336620Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:39:35.336627Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:39:35.912838Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:39:35.912949Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:39:35.916847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:39:35.916922Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:39:35.916967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.920854Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.920929Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:39:35.920985Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.921021Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.924983Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-199444 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:39:35.925187Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:39:35.926143Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:39:35.933981Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:39:35.940842Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:39:35.940890Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:39:35.941336Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:39:35.942004Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:39:35.945538Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67] <==
	{"level":"info","ts":"2025-12-29T07:38:46.330130Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:38:46.345603Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:38:46.346529Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:38:46.361762Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:38:46.380047Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:38:46.395050Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:38:46.395172Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:39:18.107609Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-29T07:39:18.107659Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-199444","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-29T07:39:18.107789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-29T07:39:18.399731Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-29T07:39:18.401318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401364Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401451Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401554Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-29T07:39:18.401595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401455Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-29T07:39:18.401684Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:39:18.401481Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-29T07:39:18.401808Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-29T07:39:18.401824Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-29T07:39:18.405268Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-29T07:39:18.405386Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:39:18.405419Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:39:18.405429Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-199444","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 07:39:59 up  4:22,  0 user,  load average: 2.60, 2.21, 2.42
	Linux pause-199444 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7] <==
	I1229 07:39:02.161113       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:39:02.161694       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:39:02.161869       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:39:02.161944       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:39:02.161961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:39:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:39:02.358182       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:39:02.358209       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:39:02.358218       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:39:02.359722       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:39:02.558293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:39:02.558325       1 metrics.go:72] Registering metrics
	I1229 07:39:02.558383       1 controller.go:711] "Syncing nftables rules"
	I1229 07:39:12.358036       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:39:12.358093       1 main.go:301] handling current node
	
	
	==> kindnet [d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99] <==
	I1229 07:39:35.462856       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:39:35.463806       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:39:35.465004       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:39:35.465063       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:39:35.465101       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:39:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:39:35.669282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:39:35.669350       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:39:35.678035       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:39:35.679301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:39:38.456932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:39:38.457185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:39:38.457135       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1229 07:39:38.463214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1229 07:39:40.079285       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:39:40.079370       1 metrics.go:72] Registering metrics
	I1229 07:39:40.079481       1 controller.go:711] "Syncing nftables rules"
	I1229 07:39:45.670319       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:39:45.670467       1 main.go:301] handling current node
	I1229 07:39:55.669577       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:39:55.669611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14a2cd392b80e34aa9627f104907fb18c7d7c86a7252093b1c9179bc59c7cc58] <==
	I1229 07:39:38.504382       1 policy_source.go:248] refreshing policies
	I1229 07:39:38.504604       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:39:38.504660       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:39:38.504772       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:38.506167       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:39:38.506450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:39:38.511743       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:38.511970       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:38.512863       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:39:38.512917       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:39:38.513007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:39:38.513051       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:39:38.513230       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:39:38.513340       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:39:38.513806       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:39:38.514196       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:39:38.514242       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:39:38.516124       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:39:38.522713       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:39:39.188249       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:39:40.347726       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:39:41.810102       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:39:41.861184       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:39:41.913563       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:39:42.020945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b] <==
	W1229 07:39:18.127796       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127854       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127900       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128008       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128086       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128185       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128274       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128371       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128465       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128557       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128653       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128906       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.132051       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133038       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133576       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133714       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133847       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133970       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.134137       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.134577       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127914       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.135299       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.135396       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127942       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1229 07:39:18.138917       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a] <==
	I1229 07:38:56.952586       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952593       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952600       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952605       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952613       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952624       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952631       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.970828       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971062       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971275       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971476       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971637       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971707       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.973529       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.973631       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.974326       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.990894       1 range_allocator.go:433] "Set node PodCIDR" node="pause-199444" podCIDRs=["10.244.0.0/24"]
	I1229 07:38:56.991025       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:57.020690       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:38:57.072477       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:57.154272       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:57.154296       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:38:57.154301       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:38:57.221776       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:16.949471       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3] <==
	I1229 07:39:41.540229       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540241       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540249       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540255       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540270       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540277       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540286       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540299       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.546306       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:39:41.546352       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:39:41.546381       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:39:41.546411       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540305       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540313       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540403       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540411       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540535       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540545       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540552       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540558       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.560879       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.625824       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.639094       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.639121       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:39:41.639128       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3] <==
	I1229 07:39:36.349139       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:39:36.442664       1 shared_informer.go:370] "Waiting for caches to sync"
	E1229 07:39:38.493072       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-199444\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1229 07:39:40.043912       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:40.044054       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:39:40.044182       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:39:40.064356       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:39:40.064505       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:39:40.072143       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:39:40.072458       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:39:40.072482       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:39:40.073688       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:39:40.073707       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:39:40.073988       1 config.go:200] "Starting service config controller"
	I1229 07:39:40.074005       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:39:40.074560       1 config.go:309] "Starting node config controller"
	I1229 07:39:40.074617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:39:40.074647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:39:40.074905       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:39:40.074923       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:39:40.174666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:39:40.174708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:39:40.174994       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866] <==
	I1229 07:38:58.691960       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:38:58.796684       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:38:58.897705       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:58.897741       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:38:58.897808       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:38:59.159416       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:38:59.159471       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:38:59.167002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:38:59.167282       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:38:59.167297       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:38:59.168698       1 config.go:200] "Starting service config controller"
	I1229 07:38:59.168708       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:38:59.168724       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:38:59.168727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:38:59.168746       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:38:59.168750       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:38:59.189014       1 config.go:309] "Starting node config controller"
	I1229 07:38:59.189033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:38:59.189049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:38:59.269297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:38:59.269335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:38:59.269367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272] <==
	I1229 07:39:38.437319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:39:38.437413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:39:38.439686       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:39:38.440948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:39:38.441023       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:39:38.441064       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1229 07:39:38.481442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:39:38.481738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:39:38.481842       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:39:38.481935       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:39:38.482029       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:39:38.482117       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:39:38.482229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:39:38.482317       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:39:38.482410       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:39:38.482530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:39:38.482625       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:39:38.482735       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:39:38.482843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:39:38.484092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:39:38.484212       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:39:38.484302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:39:38.484456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:39:38.484562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I1229 07:39:38.541146       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9] <==
	E1229 07:38:50.181516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:38:50.224018       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:38:50.278892       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:38:50.338707       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:38:50.478721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:38:50.479582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:38:50.481738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:38:50.510240       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:38:50.545641       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:38:50.553302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:38:50.559157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:38:50.623628       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:38:50.649449       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:38:50.697839       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:38:50.744663       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:38:50.764211       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:38:50.845290       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:38:50.950511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1229 07:38:53.049459       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:18.093905       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1229 07:39:18.093936       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1229 07:39:18.093954       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1229 07:39:18.093981       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:39:18.094115       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1229 07:39:18.094130       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.282331    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-199444" containerName="etcd"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.286904    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="86d281f7085150a3161efc5f782fdcf0" pod="kube-system/kube-apiserver-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.287878    1299 reflector.go:204] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-199444\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.306605    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="6a38558238ecd1074e40bc4051c40351" pod="kube-system/kube-controller-manager-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.315827    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-7kqm7\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="b26d894d-b2f4-43a4-ab75-91e784b0888a" pod="kube-system/kindnet-7kqm7"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.345684    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-hjhq6\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="878aafee-6668-4ad2-8c4c-dafd29e52775" pod="kube-system/kube-proxy-hjhq6"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.368689    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-fkkph\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="ff98f4b2-06a7-4b08-b45e-ddaf75cc6ebd" pod="kube-system/coredns-7d764666f9-fkkph"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.380518    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="cde101d16277a96d4afdbe3e8f9dbe1f" pod="kube-system/etcd-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.387289    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="0a5b0e807aadce20784c5fa28ad32717" pod="kube-system/kube-scheduler-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.393023    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="cde101d16277a96d4afdbe3e8f9dbe1f" pod="kube-system/etcd-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.394449    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="0a5b0e807aadce20784c5fa28ad32717" pod="kube-system/kube-scheduler-pause-199444"
	Dec 29 07:39:40 pause-199444 kubelet[1299]: E1229 07:39:40.805158    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-199444" containerName="kube-apiserver"
	Dec 29 07:39:43 pause-199444 kubelet[1299]: W1229 07:39:43.148692    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 29 07:39:44 pause-199444 kubelet[1299]: E1229 07:39:44.917621    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-199444" containerName="kube-scheduler"
	Dec 29 07:39:47 pause-199444 kubelet[1299]: E1229 07:39:47.283065    1299 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fkkph" containerName="coredns"
	Dec 29 07:39:47 pause-199444 kubelet[1299]: E1229 07:39:47.656820    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-199444" containerName="kube-controller-manager"
	Dec 29 07:39:48 pause-199444 kubelet[1299]: E1229 07:39:48.268637    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-199444" containerName="etcd"
	Dec 29 07:39:48 pause-199444 kubelet[1299]: E1229 07:39:48.310259    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-199444" containerName="etcd"
	Dec 29 07:39:50 pause-199444 kubelet[1299]: E1229 07:39:50.815797    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-199444" containerName="kube-apiserver"
	Dec 29 07:39:51 pause-199444 kubelet[1299]: E1229 07:39:51.318147    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-199444" containerName="kube-apiserver"
	Dec 29 07:39:54 pause-199444 kubelet[1299]: E1229 07:39:54.928208    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-199444" containerName="kube-scheduler"
	Dec 29 07:39:55 pause-199444 kubelet[1299]: E1229 07:39:55.326696    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-199444" containerName="kube-scheduler"
	Dec 29 07:39:55 pause-199444 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:39:55 pause-199444 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:39:55 pause-199444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-199444 -n pause-199444
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-199444 -n pause-199444: exit status 2 (558.878612ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-199444 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-199444
helpers_test.go:244: (dbg) docker inspect pause-199444:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348",
	        "Created": "2025-12-29T07:38:27.961757823Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 865835,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:38:28.963264045Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/hosts",
	        "LogPath": "/var/lib/docker/containers/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348/6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348-json.log",
	        "Name": "/pause-199444",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-199444:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-199444",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c18e532cae34fead4f1d959f89ca1e220ced16bead89c910d07966aa0f0b348",
	                "LowerDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad52fe2cdfa7462b46351ada8243f608a908966fa9ef5c46c4d267859b46520d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-199444",
	                "Source": "/var/lib/docker/volumes/pause-199444/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-199444",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-199444",
	                "name.minikube.sigs.k8s.io": "pause-199444",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d502a99e58f7a67d3458820214d19b2a6887cebd2173f56593c8b7aa875522b",
	            "SandboxKey": "/var/run/docker/netns/8d502a99e58f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33728"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33729"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33730"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33731"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-199444": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:67:10:1f:e9:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c9898f8f3702d07027fc90360666319a7e172f32a08540f329560eafd37e558",
	                    "EndpointID": "5b9d70025df59fd0a52ef924ef39e4d68d9ef6361bb94d936bdefac399382f08",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-199444",
	                        "6c18e532cae3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
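Note: the inspect output above shows that the pause-199444 container publishes each service port only on 127.0.0.1 with an ephemeral host port (22/tcp on 33728, 8443/tcp on 33731, and so on). The harness resolves the SSH port with the same Go-template query it logs later in this run; a minimal sketch, assuming the container is still up:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-199444

Swapping "22/tcp" for "8443/tcp" returns the forwarded API server port (33731 in this run).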
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-199444 -n pause-199444
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-199444 -n pause-199444: exit status 2 (561.057212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
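Note: --format={{.Host}} surfaces only the machine state, so "Running" together with exit status 2 here means the Docker machine is up while the just-paused cluster components are not reporting healthy, which is why the harness treats the error as possibly benign below. A hedged variant, assuming the default status fields Host, Kubelet, APIServer and Kubeconfig, that shows all four in one call:

	out/minikube-linux-arm64 status -p pause-199444 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'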
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-199444 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-199444 logs -n 25: (2.144290918s)
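Note: for the post-mortem below the harness limits how far back each log source is read (logs -n 25). When reproducing locally it can be easier to capture the full output to a file; a sketch, assuming this minikube build supports the --file flag of the logs command:

	out/minikube-linux-arm64 -p pause-199444 logs --file=pause-199444-postmortem.txt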
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-272744                                                                                         │ multinode-272744            │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │                     │
	│ start   │ -p multinode-272744-m02 --driver=docker  --container-runtime=crio                                                │ multinode-272744-m02        │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │                     │
	│ start   │ -p multinode-272744-m03 --driver=docker  --container-runtime=crio                                                │ multinode-272744-m03        │ jenkins │ v1.37.0 │ 29 Dec 25 07:35 UTC │ 29 Dec 25 07:36 UTC │
	│ node    │ add -p multinode-272744                                                                                          │ multinode-272744            │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ delete  │ -p multinode-272744-m03                                                                                          │ multinode-272744-m03        │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ delete  │ -p multinode-272744                                                                                              │ multinode-272744            │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ start   │ -p scheduled-stop-301780 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ stop    │ -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --cancel-scheduled                                                                      │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:36 UTC │ 29 Dec 25 07:36 UTC │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │                     │
	│ stop    │ -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:37 UTC │ 29 Dec 25 07:37 UTC │
	│ delete  │ -p scheduled-stop-301780                                                                                         │ scheduled-stop-301780       │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:38 UTC │
	│ start   │ -p insufficient-storage-190289 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-190289 │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │                     │
	│ delete  │ -p insufficient-storage-190289                                                                                   │ insufficient-storage-190289 │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:38 UTC │
	│ start   │ -p pause-199444 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-199444                │ jenkins │ v1.37.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p missing-upgrade-865040 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-865040      │ jenkins │ v1.35.0 │ 29 Dec 25 07:38 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p pause-199444 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-199444                │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │ 29 Dec 25 07:39 UTC │
	│ start   │ -p missing-upgrade-865040 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-865040      │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │                     │
	│ pause   │ -p pause-199444 --alsologtostderr -v=5                                                                           │ pause-199444                │ jenkins │ v1.37.0 │ 29 Dec 25 07:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:39:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:39:21.285263  870553 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:39:21.285377  870553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:21.285389  870553 out.go:374] Setting ErrFile to fd 2...
	I1229 07:39:21.285395  870553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:39:21.285660  870553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:39:21.286029  870553 out.go:368] Setting JSON to false
	I1229 07:39:21.286901  870553 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15713,"bootTime":1766978248,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:39:21.286967  870553 start.go:143] virtualization:  
	I1229 07:39:21.291815  870553 out.go:179] * [missing-upgrade-865040] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:39:21.295509  870553 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:39:21.295601  870553 notify.go:221] Checking for updates...
	I1229 07:39:21.301310  870553 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:39:21.304274  870553 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:39:21.307667  870553 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:39:21.310449  870553 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:39:21.313296  870553 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:39:21.316676  870553 config.go:182] Loaded profile config "missing-upgrade-865040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:39:21.320229  870553 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1229 07:39:21.323051  870553 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:39:21.345918  870553 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:39:21.346041  870553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:21.405563  870553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:39:21.396252553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:21.405668  870553 docker.go:319] overlay module found
	I1229 07:39:21.408707  870553 out.go:179] * Using the docker driver based on existing profile
	I1229 07:39:21.411569  870553 start.go:309] selected driver: docker
	I1229 07:39:21.411588  870553 start.go:928] validating driver "docker" against &{Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:21.411696  870553 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:39:21.412434  870553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:39:21.468326  870553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:39:21.45961504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:39:21.468630  870553 cni.go:84] Creating CNI manager for ""
	I1229 07:39:21.468708  870553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:39:21.468755  870553 start.go:353] cluster config:
	{Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:21.472091  870553 out.go:179] * Starting "missing-upgrade-865040" primary control-plane node in "missing-upgrade-865040" cluster
	I1229 07:39:21.474912  870553 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:39:21.479735  870553 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:39:21.482563  870553 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1229 07:39:21.482608  870553 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:39:21.482631  870553 cache.go:65] Caching tarball of preloaded images
	I1229 07:39:21.482667  870553 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1229 07:39:21.482733  870553 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:39:21.482744  870553 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1229 07:39:21.482841  870553 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/config.json ...
	I1229 07:39:21.503174  870553 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1229 07:39:21.503199  870553 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1229 07:39:21.503216  870553 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:39:21.503248  870553 start.go:360] acquireMachinesLock for missing-upgrade-865040: {Name:mka6fa0178c2cd1d84f888ae65ca310ceb8ae124 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:39:21.503311  870553 start.go:364] duration metric: took 40.214µs to acquireMachinesLock for "missing-upgrade-865040"
	I1229 07:39:21.503332  870553 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:39:21.503340  870553 fix.go:54] fixHost starting: 
	I1229 07:39:21.503624  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.520838  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:21.520920  870553 fix.go:112] recreateIfNeeded on missing-upgrade-865040: state= err=unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.520981  870553 fix.go:117] machineExists: false. err=machine does not exist
	I1229 07:39:21.524370  870553 out.go:179] * docker "missing-upgrade-865040" container is missing, will recreate.
	I1229 07:39:23.317349  870002 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:39:23.317383  870002 machine.go:97] duration metric: took 6.871154537s to provisionDockerMachine
	I1229 07:39:23.317395  870002 start.go:293] postStartSetup for "pause-199444" (driver="docker")
	I1229 07:39:23.317411  870002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:23.317492  870002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:23.317540  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.338302  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.445131  870002 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:23.448885  870002 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:23.448918  870002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:39:23.448930  870002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:39:23.449013  870002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:39:23.449149  870002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:39:23.449295  870002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:23.456874  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:23.473659  870002 start.go:296] duration metric: took 156.247989ms for postStartSetup
	I1229 07:39:23.473750  870002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:23.473812  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.490418  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.595154  870002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:23.600302  870002 fix.go:56] duration metric: took 7.183200804s for fixHost
	I1229 07:39:23.600339  870002 start.go:83] releasing machines lock for "pause-199444", held for 7.18326442s
	I1229 07:39:23.600439  870002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-199444
	I1229 07:39:23.618998  870002 ssh_runner.go:195] Run: cat /version.json
	I1229 07:39:23.619057  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.619344  870002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:39:23.619406  870002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-199444
	I1229 07:39:23.654216  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.662523  870002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/pause-199444/id_rsa Username:docker}
	I1229 07:39:23.868635  870002 ssh_runner.go:195] Run: systemctl --version
	I1229 07:39:23.875281  870002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:39:23.934339  870002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:39:23.939097  870002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:39:23.939185  870002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:23.948507  870002 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:39:23.948533  870002 start.go:496] detecting cgroup driver to use...
	I1229 07:39:23.948566  870002 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:39:23.948613  870002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:39:23.964582  870002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:39:23.978282  870002 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:39:23.978343  870002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:39:23.993997  870002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:39:24.009620  870002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:39:24.150793  870002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:39:24.275294  870002 docker.go:234] disabling docker service ...
	I1229 07:39:24.275403  870002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:39:24.290576  870002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:39:24.303547  870002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:39:24.440012  870002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:39:24.576980  870002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:39:24.590702  870002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:39:24.604295  870002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:39:24.604392  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.613246  870002 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:39:24.613323  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.622697  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.631653  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.640363  870002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:39:24.648659  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.657483  870002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.665864  870002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:24.674790  870002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:39:24.682090  870002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:39:24.689650  870002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:24.814690  870002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:39:21.527339  870553 delete.go:124] DEMOLISHING missing-upgrade-865040 ...
	I1229 07:39:21.527473  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.542678  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	W1229 07:39:21.542742  870553 stop.go:83] unable to get state: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.542765  870553 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.543231  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.559157  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:21.559225  870553 delete.go:82] Unable to get host status for missing-upgrade-865040, assuming it has already been deleted: state: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.559295  870553 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-865040
	W1229 07:39:21.573661  870553 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-865040 returned with exit code 1
	I1229 07:39:21.573718  870553 kic.go:371] could not find the container missing-upgrade-865040 to remove it. will try anyways
	I1229 07:39:21.573779  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:21.588521  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	W1229 07:39:21.588587  870553 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:21.588662  870553 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-865040 /bin/bash -c "sudo init 0"
	W1229 07:39:21.604078  870553 cli_runner.go:211] docker exec --privileged -t missing-upgrade-865040 /bin/bash -c "sudo init 0" returned with exit code 1
	I1229 07:39:21.604113  870553 oci.go:659] error shutdown missing-upgrade-865040: docker exec --privileged -t missing-upgrade-865040 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:22.604285  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:22.621379  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:22.621444  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:22.621458  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:22.621500  870553 retry.go:84] will retry after 700ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:23.286007  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:23.305978  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:23.306038  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:23.306048  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:23.705979  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:23.725443  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:23.725529  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:23.725545  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:25.182214  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:25.199370  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:25.199434  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:25.199449  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:26.162961  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:26.179222  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:26.179284  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:26.179309  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:29.842308  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:29.859429  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:29.859514  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:29.859524  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:29.859564  870553 retry.go:84] will retry after 3.7s: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:33.338732  870002 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.524006273s)
	I1229 07:39:33.338758  870002 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:39:33.338808  870002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:39:33.342695  870002 start.go:574] Will wait 60s for crictl version
	I1229 07:39:33.342767  870002 ssh_runner.go:195] Run: which crictl
	I1229 07:39:33.346186  870002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:39:33.374244  870002 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:39:33.374336  870002 ssh_runner.go:195] Run: crio --version
	I1229 07:39:33.401553  870002 ssh_runner.go:195] Run: crio --version
	I1229 07:39:33.434965  870002 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:39:33.437982  870002 cli_runner.go:164] Run: docker network inspect pause-199444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:33.454103  870002 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:39:33.458108  870002 kubeadm.go:884] updating cluster {Name:pause-199444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-199444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:39:33.458243  870002 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:39:33.458332  870002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:33.493350  870002 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:33.493373  870002 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:39:33.493424  870002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:33.520839  870002 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:33.520867  870002 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:39:33.520875  870002 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1229 07:39:33.520969  870002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-199444 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-199444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:39:33.521052  870002 ssh_runner.go:195] Run: crio config
	I1229 07:39:33.600745  870002 cni.go:84] Creating CNI manager for ""
	I1229 07:39:33.600768  870002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:39:33.600831  870002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:39:33.600864  870002 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-199444 NodeName:pause-199444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:39:33.601040  870002 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-199444"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:39:33.601139  870002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:39:33.610505  870002 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:39:33.610600  870002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:39:33.618485  870002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1229 07:39:33.632019  870002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:39:33.645279  870002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1229 07:39:33.659366  870002 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:39:33.663359  870002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:33.792849  870002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:33.806203  870002 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444 for IP: 192.168.76.2
	I1229 07:39:33.806268  870002 certs.go:195] generating shared ca certs ...
	I1229 07:39:33.806299  870002 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:33.806465  870002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:39:33.806544  870002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:39:33.806576  870002 certs.go:257] generating profile certs ...
	I1229 07:39:33.806692  870002 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.key
	I1229 07:39:33.806789  870002 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/apiserver.key.e4e09964
	I1229 07:39:33.806853  870002 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/proxy-client.key
	I1229 07:39:33.807002  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:39:33.807064  870002 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:39:33.807098  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:39:33.807148  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:39:33.807207  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:39:33.807256  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:39:33.807333  870002 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:33.808222  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:39:33.832887  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:39:33.851200  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:39:33.870542  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:39:33.889475  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1229 07:39:33.908027  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:39:33.926155  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:39:33.944368  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:39:33.962886  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:39:33.981645  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:39:33.999208  870002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:39:34.020533  870002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:39:34.036405  870002 ssh_runner.go:195] Run: openssl version
	I1229 07:39:34.045032  870002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.053973  870002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:39:34.063212  870002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.067446  870002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.067616  870002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:39:34.109684  870002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:34.117446  870002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.125346  870002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:39:34.133014  870002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.137250  870002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.137362  870002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:34.180066  870002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:39:34.187778  870002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.195001  870002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:39:34.202575  870002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.206455  870002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.206531  870002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:39:34.247913  870002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
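The block above wires the copied CA bundles into the host's OpenSSL trust layout: each .pem under /usr/share/ca-certificates is linked into /etc/ssl/certs by name, and the log then only verifies (with `sudo test -L`) that a `<subject-hash>.0` link exists, where the hash comes from `openssl x509 -hash -noout`. Below is a minimal standalone sketch of that pattern; it shells out to openssl the same way, creates the hash link itself, and uses illustrative paths rather than minikube's internal helpers.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert exposes a CA bundle to OpenSSL-based tooling by creating the
// <subject-hash>.0 symlink under /etc/ssl/certs (needs root, like the log's
// `sudo ln -fs` calls).
func installCACert(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash OpenSSL uses to
	// look up certificates in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Replace any existing link, mirroring `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}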
	I1229 07:39:34.255552  870002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:39:34.259430  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:39:34.300832  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:39:34.341959  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:39:34.383019  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:39:34.426262  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:39:34.467728  870002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
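Each control-plane certificate is then checked with `openssl x509 -checkend 86400`, i.e. "will this certificate still be valid 24 hours from now?". The same question can be answered directly from the certificate's NotAfter field; a minimal sketch follows (the path is illustrative).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window, which is what `openssl x509 -checkend 86400`
// answers for a 24h window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}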
	I1229 07:39:34.511168  870002 kubeadm.go:401] StartCluster: {Name:pause-199444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-199444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:34.511292  870002 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:39:34.511402  870002 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:34.541481  870002 cri.go:96] found id: "de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0"
	I1229 07:39:34.541556  870002 cri.go:96] found id: "1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7"
	I1229 07:39:34.541579  870002 cri.go:96] found id: "c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866"
	I1229 07:39:34.541592  870002 cri.go:96] found id: "52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67"
	I1229 07:39:34.541597  870002 cri.go:96] found id: "4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a"
	I1229 07:39:34.541601  870002 cri.go:96] found id: "f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b"
	I1229 07:39:34.541610  870002 cri.go:96] found id: "93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9"
	I1229 07:39:34.541639  870002 cri.go:96] found id: ""
	I1229 07:39:34.541705  870002 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:39:34.561845  870002 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:39:34Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:39:34.561976  870002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:39:34.569864  870002 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:39:34.569883  870002 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:39:34.569935  870002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:39:34.577400  870002 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:39:34.578097  870002 kubeconfig.go:125] found "pause-199444" server: "https://192.168.76.2:8443"
	I1229 07:39:34.578980  870002 kapi.go:59] client config for pause-199444: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.key", CAFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:39:34.579500  870002 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:39:34.579520  870002 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:39:34.579525  870002 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:39:34.579530  870002 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:39:34.579537  870002 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:39:34.579541  870002 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:39:34.579903  870002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:39:34.589026  870002 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1229 07:39:34.589103  870002 kubeadm.go:602] duration metric: took 19.213977ms to restartPrimaryControlPlane
	I1229 07:39:34.589120  870002 kubeadm.go:403] duration metric: took 77.962519ms to StartCluster
	I1229 07:39:34.589135  870002 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:34.589225  870002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:39:34.590156  870002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:34.590397  870002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:39:34.590689  870002 config.go:182] Loaded profile config "pause-199444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:39:34.590771  870002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:39:34.596848  870002 out.go:179] * Enabled addons: 
	I1229 07:39:34.596848  870002 out.go:179] * Verifying Kubernetes components...
	I1229 07:39:34.599593  870002 addons.go:530] duration metric: took 8.815492ms for enable addons: enabled=[]
	I1229 07:39:34.599681  870002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:34.736979  870002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:34.750495  870002 node_ready.go:35] waiting up to 6m0s for node "pause-199444" to be "Ready" ...
	I1229 07:39:33.524939  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:33.542966  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:33.543026  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:33.543047  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:33.543080  870553 retry.go:84] will retry after 7.9s: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:38.309414  870002 node_ready.go:49] node "pause-199444" is "Ready"
	I1229 07:39:38.309449  870002 node_ready.go:38] duration metric: took 3.55890855s for node "pause-199444" to be "Ready" ...
	I1229 07:39:38.309463  870002 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:39:38.309521  870002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:39:38.322568  870002 api_server.go:72] duration metric: took 3.732133271s to wait for apiserver process to appear ...
	I1229 07:39:38.322596  870002 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:39:38.322615  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:38.378890  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1229 07:39:38.378919  870002 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1229 07:39:38.823496  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:38.831828  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:39:38.831869  870002 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:39:39.323441  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:39.334382  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:39:39.334411  870002 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:39:39.822736  870002 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1229 07:39:39.830904  870002 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1229 07:39:39.832249  870002 api_server.go:141] control plane version: v1.35.0
	I1229 07:39:39.832277  870002 api_server.go:131] duration metric: took 1.509674913s to wait for apiserver health ...
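The healthz wait above is a plain polling loop: anonymous requests first get 403, then 500 while post-start hooks (rbac/bootstrap-roles, the system priority-class bootstrap) finish, and finally 200 "ok". A rough standalone sketch of that loop follows; TLS verification is skipped here only to keep the sketch short, whereas minikube pins the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 with per-check details is expected while post-start
			// hooks are still completing.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}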
	I1229 07:39:39.832286  870002 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:39:39.836215  870002 system_pods.go:59] 7 kube-system pods found
	I1229 07:39:39.836253  870002 system_pods.go:61] "coredns-7d764666f9-fkkph" [ff98f4b2-06a7-4b08-b45e-ddaf75cc6ebd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:39.836261  870002 system_pods.go:61] "etcd-pause-199444" [6769c02a-a9e3-43c9-8075-1ed16e0331a3] Running
	I1229 07:39:39.836268  870002 system_pods.go:61] "kindnet-7kqm7" [b26d894d-b2f4-43a4-ab75-91e784b0888a] Running
	I1229 07:39:39.836274  870002 system_pods.go:61] "kube-apiserver-pause-199444" [4da3ca57-8553-4f05-8221-d1af098bdecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:39:39.836283  870002 system_pods.go:61] "kube-controller-manager-pause-199444" [b019d0dc-3c54-42ec-bceb-260028d175ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:39:39.836293  870002 system_pods.go:61] "kube-proxy-hjhq6" [878aafee-6668-4ad2-8c4c-dafd29e52775] Running
	I1229 07:39:39.836302  870002 system_pods.go:61] "kube-scheduler-pause-199444" [f9a4053a-62ac-46e4-980d-a84613506424] Running
	I1229 07:39:39.836310  870002 system_pods.go:74] duration metric: took 4.016021ms to wait for pod list to return data ...
	I1229 07:39:39.836322  870002 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:39:39.839670  870002 default_sa.go:45] found service account: "default"
	I1229 07:39:39.839699  870002 default_sa.go:55] duration metric: took 3.370746ms for default service account to be created ...
	I1229 07:39:39.839711  870002 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:39:39.842634  870002 system_pods.go:86] 7 kube-system pods found
	I1229 07:39:39.842670  870002 system_pods.go:89] "coredns-7d764666f9-fkkph" [ff98f4b2-06a7-4b08-b45e-ddaf75cc6ebd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:39:39.842679  870002 system_pods.go:89] "etcd-pause-199444" [6769c02a-a9e3-43c9-8075-1ed16e0331a3] Running
	I1229 07:39:39.842686  870002 system_pods.go:89] "kindnet-7kqm7" [b26d894d-b2f4-43a4-ab75-91e784b0888a] Running
	I1229 07:39:39.842692  870002 system_pods.go:89] "kube-apiserver-pause-199444" [4da3ca57-8553-4f05-8221-d1af098bdecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:39:39.842700  870002 system_pods.go:89] "kube-controller-manager-pause-199444" [b019d0dc-3c54-42ec-bceb-260028d175ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:39:39.842706  870002 system_pods.go:89] "kube-proxy-hjhq6" [878aafee-6668-4ad2-8c4c-dafd29e52775] Running
	I1229 07:39:39.842711  870002 system_pods.go:89] "kube-scheduler-pause-199444" [f9a4053a-62ac-46e4-980d-a84613506424] Running
	I1229 07:39:39.842717  870002 system_pods.go:126] duration metric: took 3.001437ms to wait for k8s-apps to be running ...
	I1229 07:39:39.842730  870002 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:39:39.842790  870002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:39:39.856700  870002 system_svc.go:56] duration metric: took 13.960576ms WaitForService to wait for kubelet
	I1229 07:39:39.856771  870002 kubeadm.go:587] duration metric: took 5.26634076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:39:39.856837  870002 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:39:39.859451  870002 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:39:39.859487  870002 node_conditions.go:123] node cpu capacity is 2
	I1229 07:39:39.859501  870002 node_conditions.go:105] duration metric: took 2.645642ms to run NodePressure ...
	I1229 07:39:39.859524  870002 start.go:242] waiting for startup goroutines ...
	I1229 07:39:39.859536  870002 start.go:247] waiting for cluster config update ...
	I1229 07:39:39.859545  870002 start.go:256] writing updated cluster config ...
	I1229 07:39:39.859890  870002 ssh_runner.go:195] Run: rm -f paused
	I1229 07:39:39.863912  870002 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:39:39.864627  870002 kapi.go:59] client config for pause-199444: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/pause-199444/client.key", CAFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:39:39.867261  870002 pod_ready.go:83] waiting for pod "coredns-7d764666f9-fkkph" in "kube-system" namespace to be "Ready" or be gone ...
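From here the run waits for each kube-system pod to report the Ready condition. A minimal client-go sketch of that wait is shown below; it assumes a kubeconfig path in $KUBECONFIG, and the pod name and timeout mirror the log, but the helper itself is not minikube's pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig location is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7d764666f9-fkkph", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
	os.Exit(1)
}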
	I1229 07:39:41.492320  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:41.514526  870553 cli_runner.go:211] docker container inspect missing-upgrade-865040 --format={{.State.Status}} returned with exit code 1
	I1229 07:39:41.514600  870553 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	I1229 07:39:41.514610  870553 oci.go:673] temporary error: container missing-upgrade-865040 status is  but expect it to be exited
	I1229 07:39:41.514644  870553 oci.go:88] couldn't shut down missing-upgrade-865040 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-865040": docker container inspect missing-upgrade-865040 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-865040
	 
	I1229 07:39:41.514706  870553 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-865040
	I1229 07:39:41.534406  870553 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-865040
	W1229 07:39:41.554747  870553 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-865040 returned with exit code 1
	I1229 07:39:41.554836  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:41.572095  870553 cli_runner.go:164] Run: docker network rm missing-upgrade-865040
	I1229 07:39:41.674344  870553 fix.go:124] Sleeping 1 second for extra luck!
	I1229 07:39:42.674502  870553 start.go:125] createHost starting for "" (driver="docker")
	W1229 07:39:41.872342  870002 pod_ready.go:104] pod "coredns-7d764666f9-fkkph" is not "Ready", error: <nil>
	W1229 07:39:43.872936  870002 pod_ready.go:104] pod "coredns-7d764666f9-fkkph" is not "Ready", error: <nil>
	W1229 07:39:45.874733  870002 pod_ready.go:104] pod "coredns-7d764666f9-fkkph" is not "Ready", error: <nil>
	I1229 07:39:42.677787  870553 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:39:42.677933  870553 start.go:159] libmachine.API.Create for "missing-upgrade-865040" (driver="docker")
	I1229 07:39:42.677967  870553 client.go:173] LocalClient.Create starting
	I1229 07:39:42.678055  870553 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:39:42.678094  870553 main.go:144] libmachine: Decoding PEM data...
	I1229 07:39:42.678119  870553 main.go:144] libmachine: Parsing certificate...
	I1229 07:39:42.678177  870553 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:39:42.678201  870553 main.go:144] libmachine: Decoding PEM data...
	I1229 07:39:42.678215  870553 main.go:144] libmachine: Parsing certificate...
	I1229 07:39:42.678472  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:39:42.695792  870553 cli_runner.go:211] docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:39:42.695889  870553 network_create.go:284] running [docker network inspect missing-upgrade-865040] to gather additional debugging logs...
	I1229 07:39:42.695913  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040
	W1229 07:39:42.711230  870553 cli_runner.go:211] docker network inspect missing-upgrade-865040 returned with exit code 1
	I1229 07:39:42.711273  870553 network_create.go:287] error running [docker network inspect missing-upgrade-865040]: docker network inspect missing-upgrade-865040: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-865040 not found
	I1229 07:39:42.711291  870553 network_create.go:289] output of [docker network inspect missing-upgrade-865040]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-865040 not found
	
	** /stderr **
	I1229 07:39:42.711412  870553 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:42.728441  870553 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:39:42.728842  870553 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:39:42.729151  870553 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:39:42.729475  870553 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7c9898f8f370 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:36:2f:ca:8d:20} reservation:<nil>}
	I1229 07:39:42.729914  870553 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c27490}
	I1229 07:39:42.729935  870553 network_create.go:124] attempt to create docker network missing-upgrade-865040 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:39:42.729999  870553 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-865040 missing-upgrade-865040
	I1229 07:39:42.795401  870553 network_create.go:108] docker network missing-upgrade-865040 192.168.85.0/24 created
	I1229 07:39:42.795439  870553 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-865040" container
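The subnet selection above is a linear scan: the candidate private 192.168.x.0/24 ranges (49, 58, 67, 76, 85, stepping by 9 as the sequence in the log suggests) are skipped while an existing docker bridge owns them, and the first free one becomes the new network's subnet. A small sketch of that scan, with the step size taken from the log rather than from minikube's source:

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 not already claimed by an
// existing bridge network.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// The four subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24 true
}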
	I1229 07:39:42.795514  870553 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:39:42.813649  870553 cli_runner.go:164] Run: docker volume create missing-upgrade-865040 --label name.minikube.sigs.k8s.io=missing-upgrade-865040 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:39:42.830639  870553 oci.go:103] Successfully created a docker volume missing-upgrade-865040
	I1229 07:39:42.830742  870553 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-865040-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-865040 --entrypoint /usr/bin/test -v missing-upgrade-865040:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1229 07:39:43.276301  870553 oci.go:107] Successfully prepared a docker volume missing-upgrade-865040
	I1229 07:39:43.276372  870553 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1229 07:39:43.276385  870553 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:39:43.276450  870553 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-865040:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:39:47.386801  870002 pod_ready.go:94] pod "coredns-7d764666f9-fkkph" is "Ready"
	I1229 07:39:47.386837  870002 pod_ready.go:86] duration metric: took 7.519552768s for pod "coredns-7d764666f9-fkkph" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:47.403051  870002 pod_ready.go:83] waiting for pod "etcd-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:48.411482  870002 pod_ready.go:94] pod "etcd-pause-199444" is "Ready"
	I1229 07:39:48.411518  870002 pod_ready.go:86] duration metric: took 1.008436645s for pod "etcd-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:48.414906  870002 pod_ready.go:83] waiting for pod "kube-apiserver-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:39:50.422260  870002 pod_ready.go:104] pod "kube-apiserver-pause-199444" is not "Ready", error: <nil>
	I1229 07:39:50.922407  870002 pod_ready.go:94] pod "kube-apiserver-pause-199444" is "Ready"
	I1229 07:39:50.922439  870002 pod_ready.go:86] duration metric: took 2.507503678s for pod "kube-apiserver-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.925331  870002 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.931600  870002 pod_ready.go:94] pod "kube-controller-manager-pause-199444" is "Ready"
	I1229 07:39:50.931635  870002 pod_ready.go:86] duration metric: took 6.272161ms for pod "kube-controller-manager-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.937323  870002 pod_ready.go:83] waiting for pod "kube-proxy-hjhq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:50.970585  870002 pod_ready.go:94] pod "kube-proxy-hjhq6" is "Ready"
	I1229 07:39:50.970611  870002 pod_ready.go:86] duration metric: took 33.261962ms for pod "kube-proxy-hjhq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:47.729359  870553 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-865040:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.452862641s)
	I1229 07:39:47.729391  870553 kic.go:203] duration metric: took 4.45300304s to extract preloaded images to volume ...
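The preload step runs a throwaway "sidecar" container whose entrypoint is tar: the .tar.lz4 is mounted read-only, the target volume is mounted at /extractDir, and the archive is unpacked into it, which is why the step takes a few seconds. A sketch of driving that same docker run invocation from Go follows; the image tag and tar flags are copied from the log, the paths are otherwise illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreloadIntoVolume unpacks a preloaded-images tarball into a docker
// volume using a temporary container, mirroring the sidecar pattern above.
func extractPreloadIntoVolume(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := extractPreloadIntoVolume(
		"preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4",
		"missing-upgrade-865040",
		"gcr.io/k8s-minikube/kicbase:v0.0.46")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}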
	W1229 07:39:47.729530  870553 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:39:47.729643  870553 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:39:47.790516  870553 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-865040 --name missing-upgrade-865040 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-865040 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-865040 --network missing-upgrade-865040 --ip 192.168.85.2 --volume missing-upgrade-865040:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1229 07:39:48.134943  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Running}}
	I1229 07:39:48.157017  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	I1229 07:39:48.182002  870553 cli_runner.go:164] Run: docker exec missing-upgrade-865040 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:39:48.233499  870553 oci.go:144] the created container "missing-upgrade-865040" has a running status.
	I1229 07:39:48.233528  870553 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa...
	I1229 07:39:49.474621  870553 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:39:49.497506  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	I1229 07:39:49.519750  870553 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:39:49.519773  870553 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-865040 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:39:49.569306  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	I1229 07:39:49.589615  870553 machine.go:94] provisionDockerMachine start ...
	I1229 07:39:49.589706  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:49.608025  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:49.608471  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:49.608515  870553 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:39:49.734552  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:49.734578  870553 ubuntu.go:182] provisioning hostname "missing-upgrade-865040"
	I1229 07:39:49.734647  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:49.752636  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:49.753094  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:49.753115  870553 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-865040 && echo "missing-upgrade-865040" | sudo tee /etc/hostname
	I1229 07:39:49.892972  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:49.893053  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:49.911117  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:49.911426  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:49.911450  870553 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-865040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-865040/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-865040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:39:50.037479  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:39:50.037506  870553 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:39:50.037535  870553 ubuntu.go:190] setting up certificates
	I1229 07:39:50.037545  870553 provision.go:84] configureAuth start
	I1229 07:39:50.037607  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:50.057401  870553 provision.go:143] copyHostCerts
	I1229 07:39:50.057486  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:39:50.057501  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:39:50.057584  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:39:50.057693  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:39:50.057704  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:39:50.057732  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:39:50.057804  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:39:50.057814  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:39:50.057840  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:39:50.057903  870553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-865040 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-865040]
	I1229 07:39:50.214609  870553 provision.go:177] copyRemoteCerts
	I1229 07:39:50.214677  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:39:50.214720  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.231830  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:50.322475  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:39:50.349051  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:39:50.373943  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:39:50.398269  870553 provision.go:87] duration metric: took 360.700468ms to configureAuth
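configureAuth regenerates a docker-machine server certificate whose SANs cover the container's IP and hostnames (san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-865040]). Below is a rough sketch of issuing such a certificate from a local CA with crypto/x509; the key size, validity, file paths, and the assumption that the CA key is an RSA PKCS#1 PEM are all illustrative choices, not what provision.go actually uses.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// newServerCert issues a server certificate for the given SANs, signed by the
// supplied CA certificate and key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split the SAN list into IP and DNS entries, matching the san=[...] list.
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	caPEM, _ := os.ReadFile("certs/ca.pem")         // illustrative path
	caKeyPEM, _ := os.ReadFile("certs/ca-key.pem")  // illustrative path
	cb, _ := pem.Decode(caPEM)
	kb, _ := pem.Decode(caKeyPEM)
	if cb == nil || kb == nil {
		fmt.Fprintln(os.Stderr, "could not decode CA cert or key PEM")
		os.Exit(1)
	}
	caCert, err := x509.ParseCertificate(cb.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sans := []string{"127.0.0.1", "192.168.85.2", "localhost", "minikube", "missing-upgrade-865040"}
	cert, key, err := newServerCert(caCert, caKey, "jenkins.missing-upgrade-865040", sans)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = os.WriteFile("server.pem", cert, 0o644)
	_ = os.WriteFile("server-key.pem", key, 0o600)
}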
	I1229 07:39:50.398295  870553 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:39:50.398520  870553 config.go:182] Loaded profile config "missing-upgrade-865040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:39:50.398626  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.421222  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:50.421834  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:50.421857  870553 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:39:50.707488  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:39:50.707512  870553 machine.go:97] duration metric: took 1.117874625s to provisionDockerMachine
	I1229 07:39:50.707523  870553 client.go:176] duration metric: took 8.029543769s to LocalClient.Create
	I1229 07:39:50.707563  870553 start.go:167] duration metric: took 8.029630514s to libmachine.API.Create "missing-upgrade-865040"
	I1229 07:39:50.707577  870553 start.go:293] postStartSetup for "missing-upgrade-865040" (driver="docker")
	I1229 07:39:50.707588  870553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:50.707674  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:50.707734  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.727659  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:50.823092  870553 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:50.826788  870553 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:50.826860  870553 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1229 07:39:50.826878  870553 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1229 07:39:50.826892  870553 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1229 07:39:50.826903  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:39:50.826986  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:39:50.827125  870553 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:39:50.827278  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:50.838326  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:50.867199  870553 start.go:296] duration metric: took 159.605997ms for postStartSetup
	I1229 07:39:50.867646  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:50.885216  870553 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/config.json ...
	I1229 07:39:50.885497  870553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:50.885550  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:50.903175  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:50.993615  870553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:50.998177  870553 start.go:128] duration metric: took 8.323636628s to createHost
	I1229 07:39:50.998268  870553 cli_runner.go:164] Run: docker container inspect missing-upgrade-865040 --format={{.State.Status}}
	W1229 07:39:51.019223  870553 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:39:51.019256  870553 machine.go:94] provisionDockerMachine start ...
	I1229 07:39:51.019338  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.036306  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:51.036638  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:51.036654  870553 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:39:51.164308  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:51.164377  870553 ubuntu.go:182] provisioning hostname "missing-upgrade-865040"
	I1229 07:39:51.164478  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.183850  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:51.184151  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:51.184162  870553 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-865040 && echo "missing-upgrade-865040" | sudo tee /etc/hostname
	I1229 07:39:51.325143  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-865040
	
	I1229 07:39:51.325253  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.343812  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:51.344128  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:51.344152  870553 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-865040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-865040/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-865040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:39:51.468937  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:39:51.468963  870553 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:39:51.468983  870553 ubuntu.go:190] setting up certificates
	I1229 07:39:51.468994  870553 provision.go:84] configureAuth start
	I1229 07:39:51.469069  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:51.486534  870553 provision.go:143] copyHostCerts
	I1229 07:39:51.486612  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:39:51.486627  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:39:51.486693  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:39:51.486790  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:39:51.486800  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:39:51.486835  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:39:51.486936  870553 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:39:51.486957  870553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:39:51.486987  870553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:39:51.487052  870553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-865040 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-865040]
	I1229 07:39:51.920985  870553 provision.go:177] copyRemoteCerts
	I1229 07:39:51.921055  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:39:51.921111  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:51.943850  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.034958  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:39:52.060427  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:39:52.086622  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:39:52.113285  870553 provision.go:87] duration metric: took 644.270214ms to configureAuth
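(Aside on the configureAuth step above: it signs a machine server certificate against the minikube CA with the SANs listed in the provision.go line — 127.0.0.1, 192.168.85.2, localhost, minikube, missing-upgrade-865040. A minimal Go sketch of issuing such a SAN-bearing certificate is shown below; it is illustrative only, using a throwaway CA generated in memory, and is not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in for the pre-existing minikube CA; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-865040"}},
		DNSNames:     []string{"localhost", "minikube", "missing-upgrade-865040"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The resulting certificate/key pair corresponds to the server.pem and server-key.pem files scp'd to /etc/docker in the log below.)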
	I1229 07:39:52.113315  870553 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:39:52.113532  870553 config.go:182] Loaded profile config "missing-upgrade-865040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1229 07:39:52.113639  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.130856  870553 main.go:144] libmachine: Using SSH client type: native
	I1229 07:39:52.131168  870553 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33738 <nil> <nil>}
	I1229 07:39:52.131189  870553 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:39:52.396162  870553 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:39:52.396187  870553 machine.go:97] duration metric: took 1.376923117s to provisionDockerMachine
	I1229 07:39:52.396199  870553 start.go:293] postStartSetup for "missing-upgrade-865040" (driver="docker")
	I1229 07:39:52.396210  870553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:39:52.396335  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:39:52.396395  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.414311  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.506252  870553 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:39:52.509648  870553 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:39:52.509687  870553 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1229 07:39:52.509697  870553 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1229 07:39:52.509705  870553 info.go:137] Remote host: Ubuntu 22.04.5 LTS
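(The "Couldn't set key VERSION_CODENAME, no corresponding struct field found" messages above come from mapping /etc/os-release keys onto a fixed struct: keys without a matching field are simply skipped. A minimal sketch of parsing that file into a map instead — illustrative only, not the libmachine code — is:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=value pairs from an os-release style file,
// stripping surrounding quotes from the values.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Println(info["PRETTY_NAME"]) // e.g. "Ubuntu 22.04.5 LTS", as reported above
}
)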
	I1229 07:39:52.509716  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:39:52.509775  870553 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:39:52.509856  870553 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:39:52.509966  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:39:52.518869  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:52.544295  870553 start.go:296] duration metric: took 148.079246ms for postStartSetup
	I1229 07:39:52.544393  870553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:39:52.544443  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.561755  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.649416  870553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:39:52.654034  870553 fix.go:56] duration metric: took 31.150685323s for fixHost
	I1229 07:39:52.654061  870553 start.go:83] releasing machines lock for "missing-upgrade-865040", held for 31.150738936s
	I1229 07:39:52.654132  870553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-865040
	I1229 07:39:52.671593  870553 ssh_runner.go:195] Run: cat /version.json
	I1229 07:39:52.671633  870553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:39:52.671643  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.671699  870553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-865040
	I1229 07:39:52.700311  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	I1229 07:39:52.701614  870553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33738 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/missing-upgrade-865040/id_rsa Username:docker}
	W1229 07:39:52.916025  870553 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1229 07:39:52.916117  870553 ssh_runner.go:195] Run: systemctl --version
	I1229 07:39:52.922023  870553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:39:53.080632  870553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1229 07:39:53.086398  870553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:53.113763  870553 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1229 07:39:53.113839  870553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:39:53.151035  870553 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
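(The two find/-exec mv runs above sideline the loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so that kindnet can later become the default network. A minimal Go sketch of the same rename-to-disable idea — a hypothetical helper, assumes root and is not minikube's cni.go — is:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames matching CNI config files in dir by appending
// ".mk_disabled", mirroring the find/mv commands in the log above.
func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		for _, p := range patterns {
			if ok, _ := filepath.Match(p, name); ok {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
				break
			}
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", disabled)
}
)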
	I1229 07:39:53.151064  870553 start.go:496] detecting cgroup driver to use...
	I1229 07:39:53.151120  870553 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:39:53.151192  870553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:39:53.178874  870553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:39:53.191638  870553 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:39:53.191730  870553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:39:53.206743  870553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:39:53.224026  870553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:39:53.327912  870553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:39:53.437779  870553 docker.go:234] disabling docker service ...
	I1229 07:39:53.437848  870553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:39:53.459800  870553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:39:53.473060  870553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:39:53.566352  870553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:39:53.669501  870553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:39:53.682277  870553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:39:53.699964  870553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1229 07:39:53.700075  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.710747  870553 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:39:53.710827  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.721224  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.732678  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.743110  870553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:39:53.752382  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.762401  870553 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.778546  870553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:39:53.788715  870553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:39:53.797280  870553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:39:53.806111  870553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:53.907472  870553 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:39:54.039351  870553 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:39:54.039468  870553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:39:54.043420  870553 start.go:574] Will wait 60s for crictl version
	I1229 07:39:54.043519  870553 ssh_runner.go:195] Run: which crictl
	I1229 07:39:54.047252  870553 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1229 07:39:54.090168  870553 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
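(The "Will wait 60s for socket path" and "Will wait 60s for crictl version" lines above are simple polling loops against /var/run/crio/crio.sock after the CRI-O restart. A minimal sketch of such a wait — illustrative, not minikube's start.go — is:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses,
// like the 60s wait for /var/run/crio/crio.sock in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}
)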
	I1229 07:39:54.090293  870553 ssh_runner.go:195] Run: crio --version
	I1229 07:39:54.130049  870553 ssh_runner.go:195] Run: crio --version
	I1229 07:39:54.176684  870553 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1229 07:39:51.171633  870002 pod_ready.go:83] waiting for pod "kube-scheduler-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:39:53.178122  870002 pod_ready.go:104] pod "kube-scheduler-pause-199444" is not "Ready", error: <nil>
	I1229 07:39:55.178119  870002 pod_ready.go:94] pod "kube-scheduler-pause-199444" is "Ready"
	I1229 07:39:55.178142  870002 pod_ready.go:86] duration metric: took 4.00647985s for pod "kube-scheduler-pause-199444" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:39:55.178168  870002 pod_ready.go:40] duration metric: took 15.314207191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:39:55.262731  870002 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:39:55.266556  870002 out.go:203] 
	W1229 07:39:55.269901  870002 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:39:55.273732  870002 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:39:55.276737  870002 out.go:179] * Done! kubectl is now configured to use "pause-199444" cluster and "default" namespace by default
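(The pod_ready.go lines above wait on the Ready condition of the kube-system pods before declaring the cluster done. A minimal client-go sketch of that readiness check — hypothetical kubeconfig path, not minikube's pod_ready.go — is:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "kube-scheduler-pause-199444")
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready)
}
)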
	I1229 07:39:54.179480  870553 cli_runner.go:164] Run: docker network inspect missing-upgrade-865040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:39:54.196258  870553 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:39:54.199969  870553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
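(The bash one-liner above rewrites /etc/hosts by dropping any existing host.minikube.internal entry and appending the gateway mapping; the same pipeline is reused further down for control-plane.minikube.internal. A minimal Go sketch of that replace-or-append idea — illustrative, assumes write access to the file — is:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<host>" and appends "ip\thost",
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
)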
	I1229 07:39:54.210977  870553 kubeadm.go:884] updating cluster {Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:39:54.211096  870553 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1229 07:39:54.211155  870553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:54.306544  870553 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:54.306570  870553 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:39:54.306626  870553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:39:54.381590  870553 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:39:54.381615  870553 cache_images.go:86] Images are preloaded, skipping loading
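(The "all images are preloaded" decision above comes from listing images through crictl and checking that the required tags are already present. A minimal sketch of that check — assuming crictl is on PATH and that its JSON output carries an images array with repoTags, which is how current crictl versions format it; not minikube's crio.go — is:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// haveImage runs "crictl images --output json" and reports whether tag is present.
func haveImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := haveImage("registry.k8s.io/pause:3.10")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
)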
	I1229 07:39:54.381623  870553 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1229 07:39:54.381719  870553 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-865040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:39:54.381814  870553 ssh_runner.go:195] Run: crio config
	I1229 07:39:54.447500  870553 cni.go:84] Creating CNI manager for ""
	I1229 07:39:54.447521  870553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:39:54.447537  870553 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:39:54.447562  870553 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-865040 NodeName:missing-upgrade-865040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:39:54.447693  870553 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-865040"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:39:54.447766  870553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1229 07:39:54.458373  870553 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:39:54.458480  870553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:39:54.468686  870553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1229 07:39:54.488565  870553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:39:54.508914  870553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1229 07:39:54.529545  870553 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:39:54.533553  870553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:39:54.545660  870553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:39:54.640517  870553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:39:54.654809  870553 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040 for IP: 192.168.85.2
	I1229 07:39:54.654834  870553 certs.go:195] generating shared ca certs ...
	I1229 07:39:54.654851  870553 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:39:54.654996  870553 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:39:54.655046  870553 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:39:54.655057  870553 certs.go:257] generating profile certs ...
	I1229 07:39:54.655142  870553 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.key
	I1229 07:39:54.655199  870553 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/apiserver.key.2348e427
	I1229 07:39:54.655241  870553 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/proxy-client.key
	I1229 07:39:54.655357  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:39:54.655396  870553 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:39:54.655409  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:39:54.655435  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:39:54.655462  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:39:54.655496  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:39:54.655543  870553 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:39:54.656140  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:39:54.693169  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:39:54.719600  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:39:54.747771  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:39:54.772318  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1229 07:39:54.796972  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:39:54.825770  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:39:54.854205  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:39:54.882164  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:39:54.909575  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:39:54.953719  870553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:39:54.982344  870553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:39:55.001441  870553 ssh_runner.go:195] Run: openssl version
	I1229 07:39:55.012749  870553 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.023130  870553 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:39:55.033239  870553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.037124  870553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.037193  870553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:39:55.044883  870553 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:39:55.053719  870553 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:39:55.062369  870553 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.071009  870553 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:39:55.080028  870553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.084035  870553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.084104  870553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:39:55.091418  870553 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:55.100109  870553 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:39:55.108891  870553 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.117522  870553 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:39:55.125840  870553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.129154  870553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.129214  870553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:39:55.136288  870553 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:39:55.145045  870553 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:39:55.153449  870553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:39:55.157029  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:39:55.164224  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:39:55.172593  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:39:55.180375  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:39:55.188626  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:39:55.196380  870553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
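(Each "openssl x509 ... -checkend 86400" run above asks whether the given certificate expires within the next 24 hours. An equivalent Go check — illustrative only — is:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
)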
	I1229 07:39:55.204421  870553 kubeadm.go:401] StartCluster: {Name:missing-upgrade-865040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-865040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:39:55.204504  870553 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:39:55.204563  870553 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:55.251514  870553 cri.go:96] found id: ""
	I1229 07:39:55.251586  870553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:39:55.262783  870553 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:39:55.262805  870553 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:39:55.262859  870553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:39:55.272205  870553 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:39:55.272910  870553 kubeconfig.go:125] found "missing-upgrade-865040" server: "https://192.168.85.2:8443"
	I1229 07:39:55.273932  870553 kapi.go:59] client config for missing-upgrade-865040: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.crt", KeyFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/profiles/missing-upgrade-865040/client.key", CAFile:"/home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1229 07:39:55.274413  870553 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1229 07:39:55.274433  870553 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1229 07:39:55.274440  870553 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1229 07:39:55.274445  870553 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1229 07:39:55.274449  870553 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1229 07:39:55.274454  870553 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1229 07:39:55.276872  870553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:39:55.288513  870553 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-29 07:39:00.880280752 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-29 07:39:54.520885384 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
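(The drift check above runs diff -u against the previously written kubeadm.yaml and treats a non-zero diff exit as "reconfigure the cluster from the new file". A minimal sketch of interpreting diff's exit status — assumes a local diff binary, not minikube's kubeadm.go — is:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs "diff -u old new": exit 0 means identical,
// exit 1 means the files differ, anything else is a real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
)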
	I1229 07:39:55.288584  870553 kubeadm.go:1161] stopping kube-system containers ...
	I1229 07:39:55.288600  870553 cri.go:61] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1229 07:39:55.288661  870553 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:39:55.378022  870553 cri.go:96] found id: ""
	I1229 07:39:55.378138  870553 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1229 07:39:55.403860  870553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:39:55.417546  870553 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:39:55.417576  870553 kubeadm.go:158] found existing configuration files:
	
	I1229 07:39:55.417630  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:39:55.427565  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:39:55.427637  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:39:55.436754  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:39:55.448374  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:39:55.448462  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:39:55.457854  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:39:55.468956  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:39:55.469031  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:39:55.479784  870553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:39:55.493510  870553 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:39:55.493581  870553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:39:55.503642  870553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:39:55.514599  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:39:55.584754  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:39:58.703569  870553 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.118777595s)
	I1229 07:39:58.703645  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:39:58.919134  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:39:59.013397  870553 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1229 07:39:59.111078  870553 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:39:59.111152  870553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:39:59.611336  870553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:40:00.111856  870553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:40:00.181898  870553 api_server.go:72] duration metric: took 1.070828393s to wait for apiserver process to appear ...
	I1229 07:40:00.181923  870553 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:40:00.181945  870553 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:40:00.182293  870553 api_server.go:315] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1229 07:40:00.682464  870553 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
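(After the control-plane static pods are re-created, the apiserver's /healthz endpoint is polled until it answers, as the repeated "Checking apiserver healthz" lines show. A minimal polling sketch — it skips TLS verification purely for illustration; the real check trusts the cluster CA — is:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// For illustration only: the real client verifies against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
)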
	
	
	==> CRI-O <==
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.333341454Z" level=info msg="Created container 15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3: kube-system/kube-proxy-hjhq6/kube-proxy" id=f6f7bc62-5e0c-43e0-902e-2c2ff713ca68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.337936706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.338627758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.339169344Z" level=info msg="Starting container: d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99" id=e31ed663-3616-4b95-ae2b-297baceec57d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.347879481Z" level=info msg="Starting container: 15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3" id=2277c8ea-4276-4d66-a5bf-2dedb2f28fa6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.349081504Z" level=info msg="Created container 03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658: kube-system/coredns-7d764666f9-fkkph/coredns" id=dc780e17-08c1-44d6-8aff-d9daf394ec43 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.349444684Z" level=info msg="Started container" PID=2405 containerID=d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99 description=kube-system/kindnet-7kqm7/kindnet-cni id=e31ed663-3616-4b95-ae2b-297baceec57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=9040c0a1cf0b65b1ce27cf7a85bf3a80b5ada57a7c2e2ed47e4afbd7b7f6ccda
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.353925555Z" level=info msg="Started container" PID=2384 containerID=15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3 description=kube-system/kube-proxy-hjhq6/kube-proxy id=2277c8ea-4276-4d66-a5bf-2dedb2f28fa6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b04efd932b4bd0a3bd6d83a0452fadcfb2117daf081dad4fa81374db796f18d
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.354266974Z" level=info msg="Starting container: 03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658" id=19e0db99-16c0-4f0c-8843-3fada19b2589 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.370346482Z" level=info msg="Started container" PID=2406 containerID=03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658 description=kube-system/coredns-7d764666f9-fkkph/coredns id=19e0db99-16c0-4f0c-8843-3fada19b2589 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a29405ea67b5e08d4be88f3e1a5fecd172741bc10625896d96d6404fcd5b027
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.374893241Z" level=info msg="Created container 3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272: kube-system/kube-scheduler-pause-199444/kube-scheduler" id=912e6b06-2be1-41dd-b88d-c483468d7a45 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.377733134Z" level=info msg="Starting container: 3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272" id=79b18b5a-bb67-4ad0-9156-c0e44a743b25 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.386268073Z" level=info msg="Started container" PID=2414 containerID=3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272 description=kube-system/kube-scheduler-pause-199444/kube-scheduler id=79b18b5a-bb67-4ad0-9156-c0e44a743b25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=749afe2ac835f477a5e4c942d7bd486d01f64dbee5af7ccc54e6dfc2d6818d1b
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.403930812Z" level=info msg="Created container e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3: kube-system/kube-controller-manager-pause-199444/kube-controller-manager" id=02348c48-39bd-444b-a73e-d53cab839b6e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.404743982Z" level=info msg="Starting container: e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3" id=93978e4e-59eb-4073-a3f5-32f90ab1addb name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:39:35 pause-199444 crio[2102]: time="2025-12-29T07:39:35.407030457Z" level=info msg="Started container" PID=2435 containerID=e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3 description=kube-system/kube-controller-manager-pause-199444/kube-controller-manager id=93978e4e-59eb-4073-a3f5-32f90ab1addb name=/runtime.v1.RuntimeService/StartContainer sandboxID=436173e1b5447055745b061f40ca19be5b4b572c8a6175c0d127357a043f97dc
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.680076224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.68018857Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.689344661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.68951798Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.700699833Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.700738684Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.700773163Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.706947993Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:39:45 pause-199444 crio[2102]: time="2025-12-29T07:39:45.707109374Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e60680b8c59f9       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     27 seconds ago       Running             kube-controller-manager   1                   436173e1b5447       kube-controller-manager-pause-199444   kube-system
	3576f4eae5c50       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     27 seconds ago       Running             kube-scheduler            1                   749afe2ac835f       kube-scheduler-pause-199444            kube-system
	d8fb8a5d7e7c9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     27 seconds ago       Running             kindnet-cni               1                   9040c0a1cf0b6       kindnet-7kqm7                          kube-system
	03d69afbac915       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     27 seconds ago       Running             coredns                   1                   6a29405ea67b5       coredns-7d764666f9-fkkph               kube-system
	14a2cd392b80e       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     27 seconds ago       Running             kube-apiserver            1                   3e165d0448270       kube-apiserver-pause-199444            kube-system
	15086f7244d65       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     27 seconds ago       Running             kube-proxy                1                   9b04efd932b4b       kube-proxy-hjhq6                       kube-system
	2bd65abc5d48c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     27 seconds ago       Running             etcd                      1                   e137deeda405a       etcd-pause-199444                      kube-system
	de742ed40b650       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     49 seconds ago       Exited              coredns                   0                   6a29405ea67b5       coredns-7d764666f9-fkkph               kube-system
	1044538f90cb7       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   About a minute ago   Exited              kindnet-cni               0                   9040c0a1cf0b6       kindnet-7kqm7                          kube-system
	c4bead8515e4b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     About a minute ago   Exited              kube-proxy                0                   9b04efd932b4b       kube-proxy-hjhq6                       kube-system
	52c5419dcaef9       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   e137deeda405a       etcd-pause-199444                      kube-system
	4d3b2913ce4c3       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   436173e1b5447       kube-controller-manager-pause-199444   kube-system
	f726d99564778       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   3e165d0448270       kube-apiserver-pause-199444            kube-system
	93170bc14e080       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   749afe2ac835f       kube-scheduler-pause-199444            kube-system
	
	
	==> coredns [03d69afbac915ecaa7aa23324b9dc7e6d51c6931c9f2072498fd818d409f7658] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52667 - 26930 "HINFO IN 8759021534572539469.6846005689263914927. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030762234s
	
	
	==> coredns [de742ed40b6505777453ab7944380979c7b0c8ba97e9f0a3357a7a6974c325a0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50911 - 40462 "HINFO IN 2006846187992352177.8836940632566329301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007964266s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-199444
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-199444
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=pause-199444
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_38_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-199444
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:39:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:38:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:38:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:38:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:39:54 +0000   Mon, 29 Dec 2025 07:39:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-199444
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                8bf55811-e14d-4a6d-85ea-f081bde819fb
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-fkkph                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     65s
	  kube-system                 etcd-pause-199444                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         70s
	  kube-system                 kindnet-7kqm7                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      66s
	  kube-system                 kube-apiserver-pause-199444             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-199444    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-hjhq6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-199444             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  67s   node-controller  Node pause-199444 event: Registered Node pause-199444 in Controller
	  Normal  RegisteredNode  22s   node-controller  Node pause-199444 event: Registered Node pause-199444 in Controller
	
	
	==> dmesg <==
	[Dec29 07:18] overlayfs: idmapped layers are currently not supported
	[Dec29 07:19] overlayfs: idmapped layers are currently not supported
	[Dec29 07:20] overlayfs: idmapped layers are currently not supported
	[Dec29 07:21] overlayfs: idmapped layers are currently not supported
	[  +3.417224] overlayfs: idmapped layers are currently not supported
	[Dec29 07:22] overlayfs: idmapped layers are currently not supported
	[ +38.497546] overlayfs: idmapped layers are currently not supported
	[Dec29 07:23] overlayfs: idmapped layers are currently not supported
	[  +3.471993] overlayfs: idmapped layers are currently not supported
	[Dec29 07:24] overlayfs: idmapped layers are currently not supported
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2bd65abc5d48c1e3a0fe601d1b60601ced20d3e6a4564fa69c349bf8a0a1e1eb] <==
	{"level":"info","ts":"2025-12-29T07:39:35.334609Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:39:35.334670Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:39:35.336280Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:39:35.336527Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:39:35.336548Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:39:35.336620Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:39:35.336627Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:39:35.912838Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:39:35.912949Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:39:35.916847Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:39:35.916922Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:39:35.916967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.920854Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.920929Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:39:35.920985Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.921021Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:39:35.924983Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-199444 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:39:35.925187Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:39:35.926143Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:39:35.933981Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:39:35.940842Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:39:35.940890Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:39:35.941336Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:39:35.942004Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:39:35.945538Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [52c5419dcaef952ed0fbcdfc375a4dbfb6df0de2bfa13b1ab22c2eea4c5a2f67] <==
	{"level":"info","ts":"2025-12-29T07:38:46.330130Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:38:46.345603Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:38:46.346529Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:38:46.361762Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:38:46.380047Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:38:46.395050Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:38:46.395172Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:39:18.107609Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-29T07:39:18.107659Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-199444","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-29T07:39:18.107789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-29T07:39:18.399731Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-29T07:39:18.401318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401364Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401451Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401554Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-29T07:39:18.401595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-29T07:39:18.401455Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-29T07:39:18.401684Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:39:18.401481Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-29T07:39:18.401808Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-29T07:39:18.401824Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-29T07:39:18.405268Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-29T07:39:18.405386Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-29T07:39:18.405419Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:39:18.405429Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-199444","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 07:40:03 up  4:22,  0 user,  load average: 3.19, 2.34, 2.46
	Linux pause-199444 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1044538f90cb727baf9a757bac77301f44852ad91704ddc8ca63cd3258db41f7] <==
	I1229 07:39:02.161113       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:39:02.161694       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:39:02.161869       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:39:02.161944       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:39:02.161961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:39:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:39:02.358182       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:39:02.358209       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:39:02.358218       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:39:02.359722       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:39:02.558293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:39:02.558325       1 metrics.go:72] Registering metrics
	I1229 07:39:02.558383       1 controller.go:711] "Syncing nftables rules"
	I1229 07:39:12.358036       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:39:12.358093       1 main.go:301] handling current node
	
	
	==> kindnet [d8fb8a5d7e7c982237067aa6ca2784f4526f1e7902940d253249fe6da49f3c99] <==
	I1229 07:39:35.462856       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:39:35.463806       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:39:35.465004       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:39:35.465063       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:39:35.465101       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:39:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:39:35.669282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:39:35.669350       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:39:35.678035       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:39:35.679301       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:39:38.456932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:39:38.457185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:39:38.457135       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1229 07:39:38.463214       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1229 07:39:40.079285       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:39:40.079370       1 metrics.go:72] Registering metrics
	I1229 07:39:40.079481       1 controller.go:711] "Syncing nftables rules"
	I1229 07:39:45.670319       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:39:45.670467       1 main.go:301] handling current node
	I1229 07:39:55.669577       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:39:55.669611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14a2cd392b80e34aa9627f104907fb18c7d7c86a7252093b1c9179bc59c7cc58] <==
	I1229 07:39:38.504382       1 policy_source.go:248] refreshing policies
	I1229 07:39:38.504604       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:39:38.504660       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:39:38.504772       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:38.506167       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:39:38.506450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:39:38.511743       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:38.511970       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:38.512863       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:39:38.512917       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:39:38.513007       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:39:38.513051       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:39:38.513230       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:39:38.513340       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:39:38.513806       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:39:38.514196       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:39:38.514242       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:39:38.516124       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:39:38.522713       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:39:39.188249       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:39:40.347726       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:39:41.810102       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:39:41.861184       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:39:41.913563       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:39:42.020945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [f726d99564778479db4e619c295506ce8718cf7a9e044557c636e62a9921968b] <==
	W1229 07:39:18.127796       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127854       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127900       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128008       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128086       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128185       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128274       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128371       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128465       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128557       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128653       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.128906       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.132051       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133038       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133576       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133714       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133847       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.133970       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.134137       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.134577       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127914       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.135299       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.135396       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1229 07:39:18.127942       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1229 07:39:18.138917       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [4d3b2913ce4c31183bfab38b2d955071916f1f9ccdab93f542f4bb6134e9028a] <==
	I1229 07:38:56.952586       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952593       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952600       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952605       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952613       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952624       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.952631       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.970828       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971062       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971275       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971476       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971637       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.971707       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.973529       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.973631       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.974326       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:56.990894       1 range_allocator.go:433] "Set node PodCIDR" node="pause-199444" podCIDRs=["10.244.0.0/24"]
	I1229 07:38:56.991025       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:57.020690       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:38:57.072477       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:57.154272       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:57.154296       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:38:57.154301       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:38:57.221776       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:16.949471       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [e60680b8c59f9f0cdc28062e8f0100473d54a7f8e0dd3498da8e0e485e2b1af3] <==
	I1229 07:39:41.540229       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540241       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540249       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540255       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540270       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540277       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540286       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540299       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.546306       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:39:41.546352       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:39:41.546381       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:39:41.546411       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540305       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540313       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540403       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540411       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540535       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540545       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540552       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.540558       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.560879       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.625824       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.639094       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:41.639121       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:39:41.639128       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [15086f7244d65b56329814193b4549ee1080b3bc8810f90f2dd6db1f43fe3ac3] <==
	I1229 07:39:36.349139       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:39:36.442664       1 shared_informer.go:370] "Waiting for caches to sync"
	E1229 07:39:38.493072       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-199444\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1229 07:39:40.043912       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:40.044054       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:39:40.044182       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:39:40.064356       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:39:40.064505       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:39:40.072143       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:39:40.072458       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:39:40.072482       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:39:40.073688       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:39:40.073707       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:39:40.073988       1 config.go:200] "Starting service config controller"
	I1229 07:39:40.074005       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:39:40.074560       1 config.go:309] "Starting node config controller"
	I1229 07:39:40.074617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:39:40.074647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:39:40.074905       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:39:40.074923       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:39:40.174666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:39:40.174708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:39:40.174994       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c4bead8515e4b5fe06cb4b30d6662a72f285653c014d4fa9ddad118a4a3bd866] <==
	I1229 07:38:58.691960       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:38:58.796684       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:38:58.897705       1 shared_informer.go:377] "Caches are synced"
	I1229 07:38:58.897741       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:38:58.897808       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:38:59.159416       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:38:59.159471       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:38:59.167002       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:38:59.167282       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:38:59.167297       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:38:59.168698       1 config.go:200] "Starting service config controller"
	I1229 07:38:59.168708       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:38:59.168724       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:38:59.168727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:38:59.168746       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:38:59.168750       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:38:59.189014       1 config.go:309] "Starting node config controller"
	I1229 07:38:59.189033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:38:59.189049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:38:59.269297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:38:59.269335       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:38:59.269367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3576f4eae5c506a21c79802f263df1c18c2d4c377563be9a284fc12ec7165272] <==
	I1229 07:39:38.437319       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:39:38.437413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:39:38.439686       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:39:38.440948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:39:38.441023       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:39:38.441064       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1229 07:39:38.481442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:39:38.481738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:39:38.481842       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:39:38.481935       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:39:38.482029       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:39:38.482117       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:39:38.482229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:39:38.482317       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:39:38.482410       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:39:38.482530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:39:38.482625       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:39:38.482735       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:39:38.482843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:39:38.484092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:39:38.484212       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:39:38.484302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:39:38.484456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:39:38.484562       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I1229 07:39:38.541146       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [93170bc14e08075a82aa9b1cabcc864a8794415ed924f510a56c2eba299de2b9] <==
	E1229 07:38:50.181516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:38:50.224018       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:38:50.278892       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:38:50.338707       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:38:50.478721       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:38:50.479582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:38:50.481738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:38:50.510240       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:38:50.545641       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:38:50.553302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:38:50.559157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:38:50.623628       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:38:50.649449       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:38:50.697839       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:38:50.744663       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:38:50.764211       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:38:50.845290       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:38:50.950511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1229 07:38:53.049459       1 shared_informer.go:377] "Caches are synced"
	I1229 07:39:18.093905       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1229 07:39:18.093936       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1229 07:39:18.093954       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1229 07:39:18.093981       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:39:18.094115       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1229 07:39:18.094130       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.282331    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-199444" containerName="etcd"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.286904    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="86d281f7085150a3161efc5f782fdcf0" pod="kube-system/kube-apiserver-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.287878    1299 reflector.go:204] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-199444\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.306605    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="6a38558238ecd1074e40bc4051c40351" pod="kube-system/kube-controller-manager-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.315827    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-7kqm7\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="b26d894d-b2f4-43a4-ab75-91e784b0888a" pod="kube-system/kindnet-7kqm7"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.345684    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-hjhq6\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="878aafee-6668-4ad2-8c4c-dafd29e52775" pod="kube-system/kube-proxy-hjhq6"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.368689    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-fkkph\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="ff98f4b2-06a7-4b08-b45e-ddaf75cc6ebd" pod="kube-system/coredns-7d764666f9-fkkph"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.380518    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="cde101d16277a96d4afdbe3e8f9dbe1f" pod="kube-system/etcd-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.387289    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="0a5b0e807aadce20784c5fa28ad32717" pod="kube-system/kube-scheduler-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.393023    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="cde101d16277a96d4afdbe3e8f9dbe1f" pod="kube-system/etcd-pause-199444"
	Dec 29 07:39:38 pause-199444 kubelet[1299]: E1229 07:39:38.394449    1299 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-199444\" is forbidden: User \"system:node:pause-199444\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-199444' and this object" podUID="0a5b0e807aadce20784c5fa28ad32717" pod="kube-system/kube-scheduler-pause-199444"
	Dec 29 07:39:40 pause-199444 kubelet[1299]: E1229 07:39:40.805158    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-199444" containerName="kube-apiserver"
	Dec 29 07:39:43 pause-199444 kubelet[1299]: W1229 07:39:43.148692    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 29 07:39:44 pause-199444 kubelet[1299]: E1229 07:39:44.917621    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-199444" containerName="kube-scheduler"
	Dec 29 07:39:47 pause-199444 kubelet[1299]: E1229 07:39:47.283065    1299 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-fkkph" containerName="coredns"
	Dec 29 07:39:47 pause-199444 kubelet[1299]: E1229 07:39:47.656820    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-199444" containerName="kube-controller-manager"
	Dec 29 07:39:48 pause-199444 kubelet[1299]: E1229 07:39:48.268637    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-199444" containerName="etcd"
	Dec 29 07:39:48 pause-199444 kubelet[1299]: E1229 07:39:48.310259    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-199444" containerName="etcd"
	Dec 29 07:39:50 pause-199444 kubelet[1299]: E1229 07:39:50.815797    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-199444" containerName="kube-apiserver"
	Dec 29 07:39:51 pause-199444 kubelet[1299]: E1229 07:39:51.318147    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-199444" containerName="kube-apiserver"
	Dec 29 07:39:54 pause-199444 kubelet[1299]: E1229 07:39:54.928208    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-199444" containerName="kube-scheduler"
	Dec 29 07:39:55 pause-199444 kubelet[1299]: E1229 07:39:55.326696    1299 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-199444" containerName="kube-scheduler"
	Dec 29 07:39:55 pause-199444 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:39:55 pause-199444 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:39:55 pause-199444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
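The tail of the kubelet log above shows systemd stopping kubelet.service at 07:39:55, which is consistent with the pause itself (minikube also stops the kubelet when pausing a cluster). Whether it stayed down can be checked on the node; a minimal sketch, assuming the pause-199444 profile from this run is still present:

	# follow-up check on the node, not part of the harness: kubelet unit state after the pause
	out/minikube-linux-arm64 -p pause-199444 ssh -- sudo systemctl status kubelet --no-pager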
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-199444 -n pause-199444
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-199444 -n pause-199444: exit status 2 (563.012991ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-199444 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.33s)
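To dig further, the probes the harness ran can be repeated by hand against the same profile; a minimal sketch, assuming pause-199444 still exists and crio is the container runtime:

	# repeat the apiserver status probe from helpers_test.go above
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-199444 -n pause-199444
	# list the kube-system containers on the node to see which are still running or paused
	out/minikube-linux-arm64 -p pause-199444 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system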

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.416567ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:52:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-512867 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-512867 describe deploy/metrics-server -n kube-system: exit status 1 (90.860936ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-512867 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
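The failure above is in the paused-state check rather than in the metrics-server addon itself: per the stderr block, minikube shells into the node and runs sudo runc list -f json to look for paused containers, and that call dies with open /run/runc: no such file or directory on this crio node. The check can be replayed by hand; a minimal sketch, assuming old-k8s-version-512867 is still running:

	# replay the exact paused-containers check quoted in the stderr above
	out/minikube-linux-arm64 -p old-k8s-version-512867 ssh -- sudo runc list -f json
	# the CRI-side view of the same containers, which does not go through /run/runc
	out/minikube-linux-arm64 -p old-k8s-version-512867 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system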
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-512867
helpers_test.go:244: (dbg) docker inspect old-k8s-version-512867:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f",
	        "Created": "2025-12-29T07:51:42.199419041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 920016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:51:42.270192384Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/hostname",
	        "HostsPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/hosts",
	        "LogPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f-json.log",
	        "Name": "/old-k8s-version-512867",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-512867:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-512867",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f",
	                "LowerDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-512867",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-512867/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-512867",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-512867",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-512867",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be5361ea3d3d41b3e561090cbdcf4b408ff4b8e741a71c2176b55dd824e035d9",
	            "SandboxKey": "/var/run/docker/netns/be5361ea3d3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-512867": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:88:d2:51:bc:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41cc9a6bc794235538e736c190e047c9f1df45f342ddeffb05ec66c680fed733",
	                    "EndpointID": "cd0197c10711664f96a77237c91871be8b6aa2c979471eda89c387e85381b8c4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-512867",
	                        "f522ea5fbf77"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
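Most of what matters in the docker inspect dump above is the port bindings and the network attachment; those fields can be pulled directly instead of reading the full JSON. A minimal sketch, assuming the container is still up (a --format variant of the same docker inspect call the harness runs):

	# host ports published for ssh (22/tcp) and the apiserver (8443/tcp)
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-512867
	# node IP on the profile's dedicated network (192.168.85.2 in this run)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-512867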
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-512867 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-512867 logs -n 25: (1.187003254s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-095300 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo containerd config dump                                                                                                                                                                                                  │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo crio config                                                                                                                                                                                                             │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ delete  │ -p cilium-095300                                                                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:45 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562  │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:51:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:51:36.467354  919589 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:51:36.467494  919589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:51:36.467514  919589 out.go:374] Setting ErrFile to fd 2...
	I1229 07:51:36.467520  919589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:51:36.467921  919589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:51:36.468771  919589 out.go:368] Setting JSON to false
	I1229 07:51:36.469699  919589 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16448,"bootTime":1766978248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:51:36.469779  919589 start.go:143] virtualization:  
	I1229 07:51:36.473629  919589 out.go:179] * [old-k8s-version-512867] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:51:36.478395  919589 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:51:36.478468  919589 notify.go:221] Checking for updates...
	I1229 07:51:36.485300  919589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:51:36.488686  919589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:51:36.491857  919589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:51:36.495102  919589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:51:36.498227  919589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:51:36.501809  919589 config.go:182] Loaded profile config "force-systemd-flag-122604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:51:36.501945  919589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:51:36.538307  919589 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:51:36.538447  919589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:51:36.592741  919589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:51:36.58246332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:51:36.592896  919589 docker.go:319] overlay module found
	I1229 07:51:36.596175  919589 out.go:179] * Using the docker driver based on user configuration
	I1229 07:51:36.599165  919589 start.go:309] selected driver: docker
	I1229 07:51:36.599193  919589 start.go:928] validating driver "docker" against <nil>
	I1229 07:51:36.599209  919589 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:51:36.600016  919589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:51:36.662198  919589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:51:36.652060309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:51:36.662350  919589 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:51:36.662583  919589 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:51:36.665609  919589 out.go:179] * Using Docker driver with root privileges
	I1229 07:51:36.668670  919589 cni.go:84] Creating CNI manager for ""
	I1229 07:51:36.668747  919589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:51:36.668758  919589 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:51:36.668935  919589 start.go:353] cluster config:
	{Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:51:36.672123  919589 out.go:179] * Starting "old-k8s-version-512867" primary control-plane node in "old-k8s-version-512867" cluster
	I1229 07:51:36.675000  919589 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:51:36.677936  919589 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:51:36.680698  919589 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:51:36.680745  919589 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:51:36.680755  919589 cache.go:65] Caching tarball of preloaded images
	I1229 07:51:36.680816  919589 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:51:36.680906  919589 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:51:36.680924  919589 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1229 07:51:36.681040  919589 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json ...
	I1229 07:51:36.681061  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json: {Name:mk93d93c4eff94157b29b25e61c04f96d81af6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:36.706766  919589 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:51:36.706792  919589 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:51:36.706811  919589 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:51:36.706843  919589 start.go:360] acquireMachinesLock for old-k8s-version-512867: {Name:mke66a3bfc4d2544e7318e37a0e696241f8d3b35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:51:36.706964  919589 start.go:364] duration metric: took 99.028µs to acquireMachinesLock for "old-k8s-version-512867"
	I1229 07:51:36.706996  919589 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:51:36.707072  919589 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:51:36.710695  919589 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:51:36.710937  919589 start.go:159] libmachine.API.Create for "old-k8s-version-512867" (driver="docker")
	I1229 07:51:36.710976  919589 client.go:173] LocalClient.Create starting
	I1229 07:51:36.711066  919589 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:51:36.711110  919589 main.go:144] libmachine: Decoding PEM data...
	I1229 07:51:36.711129  919589 main.go:144] libmachine: Parsing certificate...
	I1229 07:51:36.711204  919589 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:51:36.711226  919589 main.go:144] libmachine: Decoding PEM data...
	I1229 07:51:36.711237  919589 main.go:144] libmachine: Parsing certificate...
	I1229 07:51:36.711596  919589 cli_runner.go:164] Run: docker network inspect old-k8s-version-512867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:51:36.735623  919589 cli_runner.go:211] docker network inspect old-k8s-version-512867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:51:36.735720  919589 network_create.go:284] running [docker network inspect old-k8s-version-512867] to gather additional debugging logs...
	I1229 07:51:36.735741  919589 cli_runner.go:164] Run: docker network inspect old-k8s-version-512867
	W1229 07:51:36.750780  919589 cli_runner.go:211] docker network inspect old-k8s-version-512867 returned with exit code 1
	I1229 07:51:36.750807  919589 network_create.go:287] error running [docker network inspect old-k8s-version-512867]: docker network inspect old-k8s-version-512867: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-512867 not found
	I1229 07:51:36.750822  919589 network_create.go:289] output of [docker network inspect old-k8s-version-512867]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-512867 not found
	
	** /stderr **
	I1229 07:51:36.750924  919589 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:51:36.768568  919589 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:51:36.768939  919589 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:51:36.769282  919589 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:51:36.769502  919589 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-714d1dc9bdce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:13:37:2f:fc:8f} reservation:<nil>}
	I1229 07:51:36.769898  919589 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019df2f0}
	I1229 07:51:36.769914  919589 network_create.go:124] attempt to create docker network old-k8s-version-512867 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:51:36.769971  919589 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-512867 old-k8s-version-512867
	I1229 07:51:36.831491  919589 network_create.go:108] docker network old-k8s-version-512867 192.168.85.0/24 created
	I1229 07:51:36.831524  919589 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-512867" container
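The subnet scan above steps through the private 192.168.x.0/24 ranges in increments of 9 (49, 58, 67, 76, 85, ...) until it finds one that no existing bridge network occupies, then uses .1 as the gateway and .2 as the node's static IP. A minimal Go sketch of that selection logic (illustrative only; not minikube's actual network.go):

// subnetsketch.go - pick the first free 192.168.x.0/24 given the taken CIDRs.
package main

import (
    "fmt"
    "net"
)

func firstFreeSubnet(taken []string) (*net.IPNet, error) {
    used := make(map[string]bool, len(taken))
    for _, t := range taken {
        used[t] = true
    }
    for third := 49; third <= 255; third += 9 { // 49, 58, 67, 76, 85, ...
        cidr := fmt.Sprintf("192.168.%d.0/24", third)
        if used[cidr] {
            continue
        }
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        return ipnet, nil
    }
    return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
    taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
    subnet, err := firstFreeSubnet(taken)
    if err != nil {
        panic(err)
    }
    gw := subnet.IP.To4()
    gateway := net.IPv4(gw[0], gw[1], gw[2], 1)  // e.g. 192.168.85.1
    staticIP := net.IPv4(gw[0], gw[1], gw[2], 2) // e.g. 192.168.85.2
    fmt.Println(subnet, gateway, staticIP)
}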
	I1229 07:51:36.831601  919589 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:51:36.846929  919589 cli_runner.go:164] Run: docker volume create old-k8s-version-512867 --label name.minikube.sigs.k8s.io=old-k8s-version-512867 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:51:36.864354  919589 oci.go:103] Successfully created a docker volume old-k8s-version-512867
	I1229 07:51:36.864451  919589 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-512867-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-512867 --entrypoint /usr/bin/test -v old-k8s-version-512867:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:51:37.417385  919589 oci.go:107] Successfully prepared a docker volume old-k8s-version-512867
	I1229 07:51:37.417460  919589 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:51:37.417474  919589 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:51:37.417553  919589 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-512867:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:51:42.085805  919589 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-512867:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (4.668202459s)
	I1229 07:51:42.085849  919589 kic.go:203] duration metric: took 4.668371667s to extract preloaded images to volume ...
	W1229 07:51:42.086013  919589 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:51:42.086135  919589 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:51:42.181041  919589 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-512867 --name old-k8s-version-512867 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-512867 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-512867 --network old-k8s-version-512867 --ip 192.168.85.2 --volume old-k8s-version-512867:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:51:42.517437  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Running}}
	I1229 07:51:42.540593  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:51:42.563893  919589 cli_runner.go:164] Run: docker exec old-k8s-version-512867 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:51:42.617644  919589 oci.go:144] the created container "old-k8s-version-512867" has a running status.
	I1229 07:51:42.617671  919589 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa...
	I1229 07:51:42.710264  919589 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:51:42.745319  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:51:42.775069  919589 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:51:42.775091  919589 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-512867 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:51:42.834556  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:51:42.860132  919589 machine.go:94] provisionDockerMachine start ...
	I1229 07:51:42.860223  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:42.882650  919589 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:42.883012  919589 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I1229 07:51:42.883027  919589 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:51:42.883707  919589 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60450->127.0.0.1:33813: read: connection reset by peer
	I1229 07:51:46.040715  919589 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512867
	
	I1229 07:51:46.040742  919589 ubuntu.go:182] provisioning hostname "old-k8s-version-512867"
	I1229 07:51:46.040852  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:46.059750  919589 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.060111  919589 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I1229 07:51:46.060134  919589 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-512867 && echo "old-k8s-version-512867" | sudo tee /etc/hostname
	I1229 07:51:46.231301  919589 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512867
	
	I1229 07:51:46.231429  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:46.250218  919589 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.250522  919589 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I1229 07:51:46.250543  919589 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-512867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-512867/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-512867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:51:46.405244  919589 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:51:46.405274  919589 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:51:46.405299  919589 ubuntu.go:190] setting up certificates
	I1229 07:51:46.405315  919589 provision.go:84] configureAuth start
	I1229 07:51:46.405392  919589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:51:46.422898  919589 provision.go:143] copyHostCerts
	I1229 07:51:46.422969  919589 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:51:46.422987  919589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:51:46.423067  919589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:51:46.423171  919589 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:51:46.423183  919589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:51:46.423212  919589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:51:46.423276  919589 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:51:46.423286  919589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:51:46.423311  919589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:51:46.423373  919589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-512867 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-512867]
	I1229 07:51:46.525003  919589 provision.go:177] copyRemoteCerts
	I1229 07:51:46.525081  919589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:51:46.525131  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:46.543028  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:51:46.648913  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:51:46.667753  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:51:46.685993  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:51:46.704420  919589 provision.go:87] duration metric: took 299.078248ms to configureAuth
	I1229 07:51:46.704500  919589 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:51:46.704709  919589 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:51:46.704936  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:46.722642  919589 main.go:144] libmachine: Using SSH client type: native
	I1229 07:51:46.722975  919589 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33813 <nil> <nil>}
	I1229 07:51:46.722996  919589 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:51:47.040696  919589 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:51:47.040722  919589 machine.go:97] duration metric: took 4.180570514s to provisionDockerMachine
	I1229 07:51:47.040734  919589 client.go:176] duration metric: took 10.329746762s to LocalClient.Create
	I1229 07:51:47.040749  919589 start.go:167] duration metric: took 10.329815094s to libmachine.API.Create "old-k8s-version-512867"
	I1229 07:51:47.040756  919589 start.go:293] postStartSetup for "old-k8s-version-512867" (driver="docker")
	I1229 07:51:47.040766  919589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:51:47.040856  919589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:51:47.040939  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:47.059815  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:51:47.169400  919589 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:51:47.172987  919589 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:51:47.173016  919589 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:51:47.173061  919589 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:51:47.173142  919589 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:51:47.173236  919589 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:51:47.173353  919589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:51:47.181530  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:51:47.208570  919589 start.go:296] duration metric: took 167.798701ms for postStartSetup
	I1229 07:51:47.209040  919589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:51:47.230367  919589 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json ...
	I1229 07:51:47.230649  919589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:51:47.230708  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:47.252691  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:51:47.357940  919589 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:51:47.362779  919589 start.go:128] duration metric: took 10.655691644s to createHost
	I1229 07:51:47.362819  919589 start.go:83] releasing machines lock for "old-k8s-version-512867", held for 10.655830927s
	I1229 07:51:47.362902  919589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:51:47.380486  919589 ssh_runner.go:195] Run: cat /version.json
	I1229 07:51:47.380528  919589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:51:47.380547  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:47.380582  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:51:47.405397  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:51:47.408006  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:51:47.599879  919589 ssh_runner.go:195] Run: systemctl --version
	I1229 07:51:47.606587  919589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:51:47.642668  919589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:51:47.647049  919589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:51:47.647128  919589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:51:47.676752  919589 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
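The step above renames any pre-existing bridge/podman CNI configs so they do not conflict with the CNI minikube installs, marking them with a .mk_disabled suffix. A minimal Go sketch of that rename pass (illustrative only; the real step runs over SSH inside the node):

// cnisketch.go - disable conflicting CNI configs by renaming them.
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return nil, err
    }
    var disabled []string
    for _, e := range entries {
        name := e.Name()
        if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
            continue // already disabled or not a plain config file
        }
        if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
    }
    return disabled, nil
}

func main() {
    disabled, err := disableConflictingCNI("/etc/cni/net.d")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("disabled:", disabled)
}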
	I1229 07:51:47.676843  919589 start.go:496] detecting cgroup driver to use...
	I1229 07:51:47.676896  919589 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:51:47.676988  919589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:51:47.694699  919589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:51:47.708091  919589 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:51:47.709219  919589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:51:47.727768  919589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:51:47.747214  919589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:51:47.884343  919589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:51:48.042006  919589 docker.go:234] disabling docker service ...
	I1229 07:51:48.042114  919589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:51:48.064529  919589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:51:48.078872  919589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:51:48.208942  919589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:51:48.319551  919589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:51:48.332933  919589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:51:48.347923  919589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1229 07:51:48.348003  919589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.356968  919589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:51:48.357037  919589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.366354  919589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.375526  919589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.385039  919589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:51:48.393580  919589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.403408  919589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.418050  919589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:51:48.427353  919589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:51:48.435155  919589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:51:48.442712  919589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:48.556920  919589 ssh_runner.go:195] Run: sudo systemctl restart crio
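The sed edits above point CRI-O at registry.k8s.io/pause:3.9 and the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf, then reload systemd and restart the service. A minimal Go sketch of the same in-place rewrite (illustrative only; it would run as root on the node, not on the host, and only covers the two main settings):

// criosketch.go - rewrite pause_image and cgroup_manager in a CRI-O drop-in.
package main

import (
    "fmt"
    "os"
    "regexp"
)

func configureCrio(path, pauseImage, cgroupManager string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    conf := string(data)
    // Replace whole lines, including commented-out defaults, as the sed commands do.
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
    err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
        "registry.k8s.io/pause:3.9", "cgroupfs")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("updated; restart crio to apply")
}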
	I1229 07:51:48.732611  919589 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:51:48.732724  919589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:51:48.738140  919589 start.go:574] Will wait 60s for crictl version
	I1229 07:51:48.738246  919589 ssh_runner.go:195] Run: which crictl
	I1229 07:51:48.743677  919589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:51:48.774147  919589 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:51:48.774331  919589 ssh_runner.go:195] Run: crio --version
	I1229 07:51:48.804501  919589 ssh_runner.go:195] Run: crio --version
	I1229 07:51:48.837092  919589 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I1229 07:51:48.839895  919589 cli_runner.go:164] Run: docker network inspect old-k8s-version-512867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:51:48.859114  919589 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:51:48.862996  919589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:51:48.872862  919589 kubeadm.go:884] updating cluster {Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:51:48.872997  919589 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:51:48.873057  919589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:51:48.904473  919589 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:51:48.904497  919589 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:51:48.904559  919589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:51:48.930293  919589 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:51:48.930315  919589 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:51:48.930322  919589 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1229 07:51:48.930412  919589 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-512867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:51:48.930491  919589 ssh_runner.go:195] Run: crio config
	I1229 07:51:48.989288  919589 cni.go:84] Creating CNI manager for ""
	I1229 07:51:48.989318  919589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:51:48.989339  919589 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:51:48.989395  919589 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-512867 NodeName:old-k8s-version-512867 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:51:48.989585  919589 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-512867"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:51:48.989707  919589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1229 07:51:48.997517  919589 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:51:48.997639  919589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:51:49.007074  919589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1229 07:51:49.021382  919589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:51:49.035518  919589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1229 07:51:49.049643  919589 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:51:49.053866  919589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
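The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP, dropping any stale entry first. A minimal Go sketch of that idempotent update (illustrative only; the real command runs over SSH with sudo, and a tab separator is assumed as in the grep above):

// hostsketch.go - ensure a single "<ip>\t<host>" line exists in a hosts file.
package main

import (
    "fmt"
    "os"
    "strings"
)

func ensureHostsEntry(path, ip, host string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(string(data), "\n") {
        if strings.HasSuffix(line, "\t"+host) {
            continue // drop any old mapping for this host
        }
        kept = append(kept, line)
    }
    // Trim trailing blank lines before appending the fresh entry.
    for len(kept) > 0 && kept[len(kept)-1] == "" {
        kept = kept[:len(kept)-1]
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, host), "")
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
    if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}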
	I1229 07:51:49.064122  919589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:51:49.187113  919589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:51:49.203852  919589 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867 for IP: 192.168.85.2
	I1229 07:51:49.203926  919589 certs.go:195] generating shared ca certs ...
	I1229 07:51:49.203979  919589 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:49.204187  919589 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:51:49.204275  919589 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:51:49.204302  919589 certs.go:257] generating profile certs ...
	I1229 07:51:49.204411  919589 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.key
	I1229 07:51:49.204443  919589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt with IP's: []
	I1229 07:51:49.529811  919589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt ...
	I1229 07:51:49.529846  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: {Name:mk526f047394e0b0e81fee34f840f479993e0fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:49.530073  919589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.key ...
	I1229 07:51:49.530093  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.key: {Name:mkd287f47d88ff745ca920681743df1cf8be35e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:49.530198  919589 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key.14700b57
	I1229 07:51:49.530217  919589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt.14700b57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:51:49.861746  919589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt.14700b57 ...
	I1229 07:51:49.861785  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt.14700b57: {Name:mkd66aebae29cb0ba5d655853d09700a2a113510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:49.861988  919589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key.14700b57 ...
	I1229 07:51:49.862006  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key.14700b57: {Name:mk9b44451b765bf8ca7e1a52b4ac73187c69d8a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:49.862095  919589 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt.14700b57 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt
	I1229 07:51:49.862173  919589 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key.14700b57 -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key
	I1229 07:51:49.862234  919589 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key
	I1229 07:51:49.862251  919589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.crt with IP's: []
	I1229 07:51:50.595019  919589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.crt ...
	I1229 07:51:50.595054  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.crt: {Name:mk9a77355e7caea5fe8eba482948bfd8dc1295e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:51:50.595244  919589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key ...
	I1229 07:51:50.595259  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key: {Name:mk81f780fe491a61b27057bd31a46b696580602d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
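The profile certs above are generated with the cluster service IP, loopbacks, and the node IP as SANs. A minimal Go sketch that creates a certificate with SANs along those lines using crypto/x509 (illustrative only; it is self-signed here, whereas the certs above are signed by the shared minikubeCA):

// certsketch.go - emit a self-signed cert whose SANs mirror the ones logged above.
package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // DNS SANs shown for illustration, matching the server cert san list above.
        DNSNames: []string{"localhost", "minikube", "old-k8s-version-512867"},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
        },
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}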
	I1229 07:51:50.595461  919589 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:51:50.595513  919589 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:51:50.595527  919589 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:51:50.595564  919589 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:51:50.595593  919589 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:51:50.595622  919589 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:51:50.595671  919589 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:51:50.596234  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:51:50.615189  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:51:50.633624  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:51:50.651700  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:51:50.670725  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1229 07:51:50.688987  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:51:50.713394  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:51:50.734307  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:51:50.754588  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:51:50.774356  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:51:50.792261  919589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:51:50.810558  919589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:51:50.824056  919589 ssh_runner.go:195] Run: openssl version
	I1229 07:51:50.830646  919589 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:51:50.838473  919589 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:51:50.846232  919589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:51:50.850429  919589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:51:50.850564  919589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:51:50.891727  919589 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:50.899153  919589 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:51:50.906700  919589 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.914422  919589 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:51:50.922043  919589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.926084  919589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.926197  919589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:51:50.968708  919589 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:51:50.976439  919589 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:51:50.983851  919589 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:51:50.991777  919589 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:51:50.999992  919589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:51:51.004965  919589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:51:51.005132  919589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:51:51.052153  919589 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:51:51.060018  919589 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:51:51.067690  919589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:51:51.071453  919589 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:51:51.071512  919589 kubeadm.go:401] StartCluster: {Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:51:51.071598  919589 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:51:51.071657  919589 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:51:51.100023  919589 cri.go:96] found id: ""
	I1229 07:51:51.100095  919589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:51:51.107974  919589 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:51:51.117354  919589 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:51:51.117425  919589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:51:51.125506  919589 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:51:51.125587  919589 kubeadm.go:158] found existing configuration files:
	
	I1229 07:51:51.125658  919589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:51:51.133741  919589 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:51:51.133809  919589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:51:51.141588  919589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:51:51.149770  919589 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:51:51.149854  919589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:51:51.157446  919589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:51:51.165317  919589 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:51:51.165441  919589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:51:51.173216  919589 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:51:51.181352  919589 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:51:51.181475  919589 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:51:51.189498  919589 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:51:51.278076  919589 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:51:51.356313  919589 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:52:07.977027  919589 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1229 07:52:07.977116  919589 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:52:07.977204  919589 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:52:07.977259  919589 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:52:07.977293  919589 kubeadm.go:319] OS: Linux
	I1229 07:52:07.977338  919589 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:52:07.977386  919589 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:52:07.977432  919589 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:52:07.977480  919589 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:52:07.977527  919589 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:52:07.977574  919589 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:52:07.977619  919589 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:52:07.977669  919589 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:52:07.977716  919589 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:52:07.977788  919589 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:52:07.977881  919589 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:52:07.977973  919589 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:52:07.978035  919589 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:52:07.980288  919589 out.go:252]   - Generating certificates and keys ...
	I1229 07:52:07.980438  919589 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:52:07.980544  919589 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:52:07.980672  919589 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:52:07.980775  919589 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:52:07.980862  919589 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:52:07.980916  919589 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:52:07.980974  919589 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:52:07.981108  919589 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-512867] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:52:07.981164  919589 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:52:07.981297  919589 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-512867] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:52:07.981366  919589 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:52:07.981433  919589 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:52:07.981480  919589 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:52:07.981539  919589 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:52:07.981593  919589 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:52:07.981649  919589 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:52:07.981718  919589 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:52:07.981776  919589 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:52:07.981863  919589 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:52:07.981935  919589 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:52:07.984883  919589 out.go:252]   - Booting up control plane ...
	I1229 07:52:07.984989  919589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:52:07.985072  919589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:52:07.985142  919589 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:52:07.985253  919589 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:52:07.985344  919589 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:52:07.985386  919589 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:52:07.985550  919589 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1229 07:52:07.985631  919589 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.503177 seconds
	I1229 07:52:07.985744  919589 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:52:07.985877  919589 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:52:07.985944  919589 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:52:07.986146  919589 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-512867 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:52:07.986206  919589 kubeadm.go:319] [bootstrap-token] Using token: 2tl5ne.yoqq576ijtaso725
	I1229 07:52:07.989011  919589 out.go:252]   - Configuring RBAC rules ...
	I1229 07:52:07.989192  919589 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:52:07.989327  919589 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:52:07.989482  919589 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:52:07.989616  919589 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:52:07.989735  919589 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:52:07.989837  919589 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:52:07.989955  919589 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:52:07.990002  919589 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:52:07.990051  919589 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:52:07.990059  919589 kubeadm.go:319] 
	I1229 07:52:07.990119  919589 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:52:07.990127  919589 kubeadm.go:319] 
	I1229 07:52:07.990204  919589 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:52:07.990213  919589 kubeadm.go:319] 
	I1229 07:52:07.990238  919589 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:52:07.990300  919589 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:52:07.990353  919589 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:52:07.990361  919589 kubeadm.go:319] 
	I1229 07:52:07.990422  919589 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:52:07.990431  919589 kubeadm.go:319] 
	I1229 07:52:07.990478  919589 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:52:07.990486  919589 kubeadm.go:319] 
	I1229 07:52:07.990539  919589 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:52:07.990626  919589 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:52:07.990701  919589 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:52:07.990710  919589 kubeadm.go:319] 
	I1229 07:52:07.990796  919589 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:52:07.990878  919589 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:52:07.990884  919589 kubeadm.go:319] 
	I1229 07:52:07.990968  919589 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2tl5ne.yoqq576ijtaso725 \
	I1229 07:52:07.991073  919589 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:52:07.991099  919589 kubeadm.go:319] 	--control-plane 
	I1229 07:52:07.991106  919589 kubeadm.go:319] 
	I1229 07:52:07.991191  919589 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:52:07.991199  919589 kubeadm.go:319] 
	I1229 07:52:07.991281  919589 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2tl5ne.yoqq576ijtaso725 \
	I1229 07:52:07.991399  919589 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
	I1229 07:52:07.991412  919589 cni.go:84] Creating CNI manager for ""
	I1229 07:52:07.991419  919589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:52:07.994672  919589 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:52:07.997670  919589 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:52:08.002545  919589 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1229 07:52:08.002579  919589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:52:08.022752  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:52:09.055749  919589 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.03290827s)
	I1229 07:52:09.055786  919589 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:52:09.055880  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:09.055909  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-512867 minikube.k8s.io/updated_at=2025_12_29T07_52_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=old-k8s-version-512867 minikube.k8s.io/primary=true
	I1229 07:52:09.068586  919589 ops.go:34] apiserver oom_adj: -16
	I1229 07:52:09.241972  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:09.742979  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:10.242248  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:10.742446  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:11.242012  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:11.742390  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:12.243068  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:12.742070  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:13.242094  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:13.742145  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:14.242074  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:14.742107  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:15.242891  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:15.742813  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:16.241986  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:16.742084  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:17.242544  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:17.742836  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:18.242536  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:18.742564  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:19.242294  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:19.742134  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:20.242701  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:20.742879  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:21.242802  919589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:52:21.482899  919589 kubeadm.go:1114] duration metric: took 12.427082225s to wait for elevateKubeSystemPrivileges
	I1229 07:52:21.482955  919589 kubeadm.go:403] duration metric: took 30.411435363s to StartCluster
	I1229 07:52:21.482978  919589 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:52:21.483051  919589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:52:21.483849  919589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:52:21.484092  919589 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:52:21.484248  919589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:52:21.484532  919589 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:52:21.484512  919589 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:52:21.484643  919589 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-512867"
	I1229 07:52:21.484665  919589 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-512867"
	I1229 07:52:21.484694  919589 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:52:21.484699  919589 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-512867"
	I1229 07:52:21.484779  919589 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-512867"
	I1229 07:52:21.485207  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:52:21.485334  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:52:21.487166  919589 out.go:179] * Verifying Kubernetes components...
	I1229 07:52:21.490399  919589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:52:21.526303  919589 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:52:21.530049  919589 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:52:21.530075  919589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:52:21.530145  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:52:21.536417  919589 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-512867"
	I1229 07:52:21.536481  919589 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:52:21.539607  919589 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:52:21.581429  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:52:21.592644  919589 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:52:21.592672  919589 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:52:21.592743  919589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:52:21.627902  919589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33813 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:52:21.871072  919589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:52:21.959655  919589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:52:21.988189  919589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:52:22.168531  919589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:52:22.824556  919589 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1229 07:52:22.826369  919589 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-512867" to be "Ready" ...
	I1229 07:52:23.329352  919589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341062377s)
	I1229 07:52:23.329400  919589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.160781358s)
	I1229 07:52:23.331776  919589 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-512867" context rescaled to 1 replicas
	I1229 07:52:23.341042  919589 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:52:23.344137  919589 addons.go:530] duration metric: took 1.859617519s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1229 07:52:24.829941  919589 node_ready.go:57] node "old-k8s-version-512867" has "Ready":"False" status (will retry)
	W1229 07:52:26.830358  919589 node_ready.go:57] node "old-k8s-version-512867" has "Ready":"False" status (will retry)
	W1229 07:52:29.330073  919589 node_ready.go:57] node "old-k8s-version-512867" has "Ready":"False" status (will retry)
	W1229 07:52:31.330974  919589 node_ready.go:57] node "old-k8s-version-512867" has "Ready":"False" status (will retry)
	W1229 07:52:33.829912  919589 node_ready.go:57] node "old-k8s-version-512867" has "Ready":"False" status (will retry)
	I1229 07:52:35.333292  919589 node_ready.go:49] node "old-k8s-version-512867" is "Ready"
	I1229 07:52:35.333325  919589 node_ready.go:38] duration metric: took 12.506341481s for node "old-k8s-version-512867" to be "Ready" ...
	I1229 07:52:35.333339  919589 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:52:35.333398  919589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:52:35.347854  919589 api_server.go:72] duration metric: took 13.863731345s to wait for apiserver process to appear ...
	I1229 07:52:35.347878  919589 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:52:35.347898  919589 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:52:35.357388  919589 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:52:35.358803  919589 api_server.go:141] control plane version: v1.28.0
	I1229 07:52:35.358827  919589 api_server.go:131] duration metric: took 10.942261ms to wait for apiserver health ...
	I1229 07:52:35.358836  919589 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:52:35.362531  919589 system_pods.go:59] 8 kube-system pods found
	I1229 07:52:35.362567  919589 system_pods.go:61] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:52:35.362576  919589 system_pods.go:61] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running
	I1229 07:52:35.362585  919589 system_pods.go:61] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:52:35.362590  919589 system_pods.go:61] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running
	I1229 07:52:35.362595  919589 system_pods.go:61] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running
	I1229 07:52:35.362601  919589 system_pods.go:61] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:52:35.362606  919589 system_pods.go:61] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running
	I1229 07:52:35.362612  919589 system_pods.go:61] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:52:35.362622  919589 system_pods.go:74] duration metric: took 3.779958ms to wait for pod list to return data ...
	I1229 07:52:35.362630  919589 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:52:35.365024  919589 default_sa.go:45] found service account: "default"
	I1229 07:52:35.365048  919589 default_sa.go:55] duration metric: took 2.412518ms for default service account to be created ...
	I1229 07:52:35.365057  919589 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:52:35.368388  919589 system_pods.go:86] 8 kube-system pods found
	I1229 07:52:35.368422  919589 system_pods.go:89] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:52:35.368429  919589 system_pods.go:89] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running
	I1229 07:52:35.368436  919589 system_pods.go:89] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:52:35.368467  919589 system_pods.go:89] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running
	I1229 07:52:35.368480  919589 system_pods.go:89] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running
	I1229 07:52:35.368485  919589 system_pods.go:89] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:52:35.368489  919589 system_pods.go:89] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running
	I1229 07:52:35.368505  919589 system_pods.go:89] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:52:35.368557  919589 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:52:35.670241  919589 system_pods.go:86] 8 kube-system pods found
	I1229 07:52:35.670277  919589 system_pods.go:89] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:52:35.670285  919589 system_pods.go:89] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running
	I1229 07:52:35.670320  919589 system_pods.go:89] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:52:35.670335  919589 system_pods.go:89] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running
	I1229 07:52:35.670341  919589 system_pods.go:89] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running
	I1229 07:52:35.670346  919589 system_pods.go:89] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:52:35.670351  919589 system_pods.go:89] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running
	I1229 07:52:35.670357  919589 system_pods.go:89] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:52:36.013099  919589 system_pods.go:86] 8 kube-system pods found
	I1229 07:52:36.013139  919589 system_pods.go:89] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:52:36.013147  919589 system_pods.go:89] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running
	I1229 07:52:36.013155  919589 system_pods.go:89] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:52:36.013193  919589 system_pods.go:89] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running
	I1229 07:52:36.013206  919589 system_pods.go:89] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running
	I1229 07:52:36.013211  919589 system_pods.go:89] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:52:36.013216  919589 system_pods.go:89] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running
	I1229 07:52:36.013223  919589 system_pods.go:89] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:52:36.346579  919589 system_pods.go:86] 8 kube-system pods found
	I1229 07:52:36.346617  919589 system_pods.go:89] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Running
	I1229 07:52:36.346625  919589 system_pods.go:89] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running
	I1229 07:52:36.346630  919589 system_pods.go:89] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:52:36.346634  919589 system_pods.go:89] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running
	I1229 07:52:36.346640  919589 system_pods.go:89] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running
	I1229 07:52:36.346645  919589 system_pods.go:89] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:52:36.346649  919589 system_pods.go:89] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running
	I1229 07:52:36.346691  919589 system_pods.go:89] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Running
	I1229 07:52:36.346707  919589 system_pods.go:126] duration metric: took 981.64386ms to wait for k8s-apps to be running ...
	I1229 07:52:36.346716  919589 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:52:36.346784  919589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:52:36.360515  919589 system_svc.go:56] duration metric: took 13.78834ms WaitForService to wait for kubelet
	I1229 07:52:36.360547  919589 kubeadm.go:587] duration metric: took 14.876429153s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:52:36.360567  919589 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:52:36.363416  919589 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:52:36.363453  919589 node_conditions.go:123] node cpu capacity is 2
	I1229 07:52:36.363466  919589 node_conditions.go:105] duration metric: took 2.894342ms to run NodePressure ...
	I1229 07:52:36.363478  919589 start.go:242] waiting for startup goroutines ...
	I1229 07:52:36.363485  919589 start.go:247] waiting for cluster config update ...
	I1229 07:52:36.363497  919589 start.go:256] writing updated cluster config ...
	I1229 07:52:36.363798  919589 ssh_runner.go:195] Run: rm -f paused
	I1229 07:52:36.367485  919589 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:52:36.371842  919589 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wr2cl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.377202  919589 pod_ready.go:94] pod "coredns-5dd5756b68-wr2cl" is "Ready"
	I1229 07:52:36.377230  919589 pod_ready.go:86] duration metric: took 5.360488ms for pod "coredns-5dd5756b68-wr2cl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.380132  919589 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.386696  919589 pod_ready.go:94] pod "etcd-old-k8s-version-512867" is "Ready"
	I1229 07:52:36.386774  919589 pod_ready.go:86] duration metric: took 6.574672ms for pod "etcd-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.390033  919589 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.395075  919589 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-512867" is "Ready"
	I1229 07:52:36.395104  919589 pod_ready.go:86] duration metric: took 5.044219ms for pod "kube-apiserver-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.398314  919589 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.771754  919589 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-512867" is "Ready"
	I1229 07:52:36.771781  919589 pod_ready.go:86] duration metric: took 373.441601ms for pod "kube-controller-manager-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:36.972967  919589 pod_ready.go:83] waiting for pod "kube-proxy-gdm8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:37.371247  919589 pod_ready.go:94] pod "kube-proxy-gdm8d" is "Ready"
	I1229 07:52:37.371320  919589 pod_ready.go:86] duration metric: took 398.320045ms for pod "kube-proxy-gdm8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:37.572003  919589 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:37.972237  919589 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-512867" is "Ready"
	I1229 07:52:37.972265  919589 pod_ready.go:86] duration metric: took 400.232416ms for pod "kube-scheduler-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:52:37.972278  919589 pod_ready.go:40] duration metric: took 1.604749356s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:52:38.026760  919589 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1229 07:52:38.030665  919589 out.go:203] 
	W1229 07:52:38.033626  919589 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1229 07:52:38.036726  919589 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:52:38.040630  919589 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-512867" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 07:52:35 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:35.70538156Z" level=info msg="Created container 58ae47f1351631a3fbcce69a712062aecb93b91c339dc4734a0adc8bdb092c3c: kube-system/coredns-5dd5756b68-wr2cl/coredns" id=7f51c08c-d3c0-41e5-abde-113533f841fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:52:35 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:35.706336387Z" level=info msg="Starting container: 58ae47f1351631a3fbcce69a712062aecb93b91c339dc4734a0adc8bdb092c3c" id=e1cb8e88-7dfd-4f6d-852e-6e910debf241 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:52:35 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:35.708817516Z" level=info msg="Started container" PID=1958 containerID=58ae47f1351631a3fbcce69a712062aecb93b91c339dc4734a0adc8bdb092c3c description=kube-system/coredns-5dd5756b68-wr2cl/coredns id=e1cb8e88-7dfd-4f6d-852e-6e910debf241 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8b054bdfc20a3f0958af92aedf992b1a9b6b2aa14c078b649ac03c9696d1379
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.56887193Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bc38d556-9390-4116-97ac-8cfa7181d46e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.568947409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.577704067Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2bde5fc3877de8a5f623f08893eaf3986638053e5d7d723820d1331bef2c301a UID:76520809-8782-4067-b741-af44649b8703 NetNS:/var/run/netns/56ae50dd-43e7-46f8-8dca-5affb557596e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40006ab530}] Aliases:map[]}"
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.577762923Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.591681511Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2bde5fc3877de8a5f623f08893eaf3986638053e5d7d723820d1331bef2c301a UID:76520809-8782-4067-b741-af44649b8703 NetNS:/var/run/netns/56ae50dd-43e7-46f8-8dca-5affb557596e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40006ab530}] Aliases:map[]}"
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.591845294Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.594710598Z" level=info msg="Ran pod sandbox 2bde5fc3877de8a5f623f08893eaf3986638053e5d7d723820d1331bef2c301a with infra container: default/busybox/POD" id=bc38d556-9390-4116-97ac-8cfa7181d46e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.597768616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a6678ef-292e-4250-b63c-395dcb1dac36 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.598102468Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6a6678ef-292e-4250-b63c-395dcb1dac36 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.598298916Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6a6678ef-292e-4250-b63c-395dcb1dac36 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.599262064Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1adba3cf-ad17-4f69-ba58-284bb5573b45 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:52:38 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:38.599657857Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.685970859Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1adba3cf-ad17-4f69-ba58-284bb5573b45 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.68938257Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=377a9b46-5450-417b-87b8-868a2105b159 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.69179206Z" level=info msg="Creating container: default/busybox/busybox" id=7dd4534e-b8bc-433a-ab64-45355437e79b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.691932286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.6967309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.697415187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.715497203Z" level=info msg="Created container 63d63cc5fee745f2acab754c3fcd41b529e7c7e4422d53b9c08e509ddc61a96f: default/busybox/busybox" id=7dd4534e-b8bc-433a-ab64-45355437e79b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.716770284Z" level=info msg="Starting container: 63d63cc5fee745f2acab754c3fcd41b529e7c7e4422d53b9c08e509ddc61a96f" id=19836532-bc74-4d29-a033-5abcb8e87439 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:52:40 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:40.718591094Z" level=info msg="Started container" PID=2019 containerID=63d63cc5fee745f2acab754c3fcd41b529e7c7e4422d53b9c08e509ddc61a96f description=default/busybox/busybox id=19836532-bc74-4d29-a033-5abcb8e87439 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bde5fc3877de8a5f623f08893eaf3986638053e5d7d723820d1331bef2c301a
	Dec 29 07:52:47 old-k8s-version-512867 crio[836]: time="2025-12-29T07:52:47.474279155Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	63d63cc5fee74       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   2bde5fc3877de       busybox                                          default
	58ae47f135163       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   b8b054bdfc20a       coredns-5dd5756b68-wr2cl                         kube-system
	bba7f0c152c9a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   bb024fa68fc9a       storage-provisioner                              kube-system
	90436d223d184       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   60a6d47661d83       kindnet-rwfkz                                    kube-system
	3df0d1a17b6b1       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   d880c2b044bb9       kube-proxy-gdm8d                                 kube-system
	3827292cec572       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   7ee0ac1bb3dca       kube-controller-manager-old-k8s-version-512867   kube-system
	451f87c85c6a5       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   89c3bc18c7ec5       kube-scheduler-old-k8s-version-512867            kube-system
	826d2dbc98698       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   8501e6df92935       etcd-old-k8s-version-512867                      kube-system
	6dea4972c1a35       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   5ca8676653aa5       kube-apiserver-old-k8s-version-512867            kube-system
	
	
	==> coredns [58ae47f1351631a3fbcce69a712062aecb93b91c339dc4734a0adc8bdb092c3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60040 - 64635 "HINFO IN 5071766514929940104.1743505814824597484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011970911s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-512867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-512867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=old-k8s-version-512867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_52_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:52:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-512867
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:52:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:52:38 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:52:38 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:52:38 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:52:38 +0000   Mon, 29 Dec 2025 07:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-512867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                37780f38-62fe-49f0-844c-77007ce7ace0
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-wr2cl                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-512867                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         44s
	  kube-system                 kindnet-rwfkz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-512867             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-512867    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-gdm8d                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-512867             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-512867 event: Registered Node old-k8s-version-512867 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-512867 status is now: NodeReady
	
	
	==> dmesg <==
	[  +3.471993] overlayfs: idmapped layers are currently not supported
	[Dec29 07:24] overlayfs: idmapped layers are currently not supported
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [826d2dbc986982a65c513d2e5221fb2c766415d82f11dac300066cbafe76e129] <==
	{"level":"info","ts":"2025-12-29T07:52:00.634951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-29T07:52:00.635119Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-29T07:52:00.636851Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:52:00.636987Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:52:00.648828Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:52:00.64908Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:52:00.649141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:52:01.076823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:52:01.076936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:52:01.076989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-29T07:52:01.077031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:52:01.077069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:52:01.077108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:52:01.077142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:52:01.080919Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:52:01.083092Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-512867 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:52:01.084844Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:52:01.084913Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:52:01.084984Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:52:01.084983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:52:01.085118Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:52:01.084888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:52:01.084856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:52:01.086311Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:52:01.098396Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:52:49 up  4:35,  0 user,  load average: 1.72, 1.91, 2.37
	Linux old-k8s-version-512867 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90436d223d184b8fbbb34e4b828e567bf4714f82e74a58dcc19c3420bc778e72] <==
	I1229 07:52:24.763964       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:52:24.853253       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:52:24.853435       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:52:24.853453       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:52:24.853484       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:52:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:52:25.054182       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:52:25.054279       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:52:25.054333       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:52:25.055529       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:52:25.354725       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:52:25.354828       1 metrics.go:72] Registering metrics
	I1229 07:52:25.354918       1 controller.go:711] "Syncing nftables rules"
	I1229 07:52:35.061008       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:52:35.061069       1 main.go:301] handling current node
	I1229 07:52:45.059597       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:52:45.059636       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6dea4972c1a35d2215fd1b914eddd5bfe9f25b8a30ce59cf7d09232e161473cc] <==
	I1229 07:52:04.955346       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1229 07:52:04.955578       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:52:04.955854       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1229 07:52:04.957558       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1229 07:52:04.957702       1 aggregator.go:166] initial CRD sync complete...
	I1229 07:52:04.957755       1 autoregister_controller.go:141] Starting autoregister controller
	I1229 07:52:04.957784       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:52:04.957827       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:52:04.985122       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1229 07:52:04.987970       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:52:05.637206       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1229 07:52:05.641957       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:52:05.641983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1229 07:52:06.265466       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:52:06.315446       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:52:06.379984       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:52:06.394141       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1229 07:52:06.395313       1 controller.go:624] quota admission added evaluator for: endpoints
	I1229 07:52:06.400579       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:52:06.931126       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1229 07:52:07.844699       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1229 07:52:07.859913       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:52:07.871026       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1229 07:52:20.545731       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1229 07:52:20.894681       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3827292cec572452ff6e9fc7d138c30e898b05e93f268f61181e7ec24a13af19] <==
	I1229 07:52:20.524503       1 shared_informer.go:318] Caches are synced for resource quota
	I1229 07:52:20.546276       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1229 07:52:20.552004       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1229 07:52:20.568914       1 shared_informer.go:318] Caches are synced for resource quota
	I1229 07:52:20.912688       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gdm8d"
	I1229 07:52:20.920133       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwfkz"
	I1229 07:52:20.936885       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:52:20.936939       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1229 07:52:20.990736       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:52:21.413333       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jvwdr"
	I1229 07:52:21.442459       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wr2cl"
	I1229 07:52:21.459343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="906.967375ms"
	I1229 07:52:21.473273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.86019ms"
	I1229 07:52:21.473368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.327µs"
	I1229 07:52:22.906027       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1229 07:52:22.936284       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jvwdr"
	I1229 07:52:22.954645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.838496ms"
	I1229 07:52:22.981437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.744983ms"
	I1229 07:52:22.981513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.728µs"
	I1229 07:52:35.306500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.99µs"
	I1229 07:52:35.328580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.615µs"
	I1229 07:52:35.395158       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1229 07:52:36.178929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.906µs"
	I1229 07:52:36.220037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.357683ms"
	I1229 07:52:36.221390       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.073µs"
	
	
	==> kube-proxy [3df0d1a17b6b1b6e0e94af8c54be82b23837656c895f61ecbedd3d24d46d2e5f] <==
	I1229 07:52:21.429951       1 server_others.go:69] "Using iptables proxy"
	I1229 07:52:21.477558       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1229 07:52:21.683378       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:52:21.709063       1 server_others.go:152] "Using iptables Proxier"
	I1229 07:52:21.709104       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1229 07:52:21.709112       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1229 07:52:21.709142       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1229 07:52:21.709355       1 server.go:846] "Version info" version="v1.28.0"
	I1229 07:52:21.709365       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:52:21.710551       1 config.go:188] "Starting service config controller"
	I1229 07:52:21.710561       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1229 07:52:21.710579       1 config.go:97] "Starting endpoint slice config controller"
	I1229 07:52:21.710583       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1229 07:52:21.711469       1 config.go:315] "Starting node config controller"
	I1229 07:52:21.711483       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1229 07:52:21.811673       1 shared_informer.go:318] Caches are synced for node config
	I1229 07:52:21.811713       1 shared_informer.go:318] Caches are synced for service config
	I1229 07:52:21.811750       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [451f87c85c6a53e23f1cbea4dd359acd54b945648c3b3e9623e1059b9797a38d] <==
	W1229 07:52:05.327503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1229 07:52:05.327518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1229 07:52:05.327577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1229 07:52:05.327593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1229 07:52:05.327647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1229 07:52:05.327661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1229 07:52:05.327702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1229 07:52:05.327717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1229 07:52:05.327758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1229 07:52:05.327772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1229 07:52:05.327820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1229 07:52:05.327836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1229 07:52:05.327901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1229 07:52:05.327917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1229 07:52:05.327976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1229 07:52:05.327995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1229 07:52:05.328045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1229 07:52:05.328061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1229 07:52:05.328110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1229 07:52:05.328123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1229 07:52:05.328176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1229 07:52:05.328190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1229 07:52:06.200349       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1229 07:52:06.201075       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1229 07:52:08.814472       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 29 07:52:20 old-k8s-version-512867 kubelet[1380]: I1229 07:52:20.440914    1380 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 29 07:52:20 old-k8s-version-512867 kubelet[1380]: I1229 07:52:20.925597    1380 topology_manager.go:215] "Topology Admit Handler" podUID="da0bed78-1180-482b-89a3-cccbfe791640" podNamespace="kube-system" podName="kube-proxy-gdm8d"
	Dec 29 07:52:20 old-k8s-version-512867 kubelet[1380]: I1229 07:52:20.932189    1380 topology_manager.go:215] "Topology Admit Handler" podUID="3700dc6c-45b9-4abc-a322-f283999cca24" podNamespace="kube-system" podName="kindnet-rwfkz"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.016544    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da0bed78-1180-482b-89a3-cccbfe791640-kube-proxy\") pod \"kube-proxy-gdm8d\" (UID: \"da0bed78-1180-482b-89a3-cccbfe791640\") " pod="kube-system/kube-proxy-gdm8d"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.016772    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3700dc6c-45b9-4abc-a322-f283999cca24-xtables-lock\") pod \"kindnet-rwfkz\" (UID: \"3700dc6c-45b9-4abc-a322-f283999cca24\") " pod="kube-system/kindnet-rwfkz"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.016914    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da0bed78-1180-482b-89a3-cccbfe791640-xtables-lock\") pod \"kube-proxy-gdm8d\" (UID: \"da0bed78-1180-482b-89a3-cccbfe791640\") " pod="kube-system/kube-proxy-gdm8d"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.017003    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da0bed78-1180-482b-89a3-cccbfe791640-lib-modules\") pod \"kube-proxy-gdm8d\" (UID: \"da0bed78-1180-482b-89a3-cccbfe791640\") " pod="kube-system/kube-proxy-gdm8d"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.017088    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2l9\" (UniqueName: \"kubernetes.io/projected/3700dc6c-45b9-4abc-a322-f283999cca24-kube-api-access-zc2l9\") pod \"kindnet-rwfkz\" (UID: \"3700dc6c-45b9-4abc-a322-f283999cca24\") " pod="kube-system/kindnet-rwfkz"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.017188    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3700dc6c-45b9-4abc-a322-f283999cca24-lib-modules\") pod \"kindnet-rwfkz\" (UID: \"3700dc6c-45b9-4abc-a322-f283999cca24\") " pod="kube-system/kindnet-rwfkz"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.017284    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2925\" (UniqueName: \"kubernetes.io/projected/da0bed78-1180-482b-89a3-cccbfe791640-kube-api-access-t2925\") pod \"kube-proxy-gdm8d\" (UID: \"da0bed78-1180-482b-89a3-cccbfe791640\") " pod="kube-system/kube-proxy-gdm8d"
	Dec 29 07:52:21 old-k8s-version-512867 kubelet[1380]: I1229 07:52:21.017375    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3700dc6c-45b9-4abc-a322-f283999cca24-cni-cfg\") pod \"kindnet-rwfkz\" (UID: \"3700dc6c-45b9-4abc-a322-f283999cca24\") " pod="kube-system/kindnet-rwfkz"
	Dec 29 07:52:25 old-k8s-version-512867 kubelet[1380]: I1229 07:52:25.152875    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gdm8d" podStartSLOduration=5.152832629 podCreationTimestamp="2025-12-29 07:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:52:22.181607038 +0000 UTC m=+14.368203247" watchObservedRunningTime="2025-12-29 07:52:25.152832629 +0000 UTC m=+17.339428822"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.273454    1380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.306147    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-rwfkz" podStartSLOduration=11.887532272 podCreationTimestamp="2025-12-29 07:52:20 +0000 UTC" firstStartedPulling="2025-12-29 07:52:21.273081208 +0000 UTC m=+13.459677401" lastFinishedPulling="2025-12-29 07:52:24.691655343 +0000 UTC m=+16.878251528" observedRunningTime="2025-12-29 07:52:25.153207137 +0000 UTC m=+17.339803322" watchObservedRunningTime="2025-12-29 07:52:35.306106399 +0000 UTC m=+27.492702584"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.306562    1380 topology_manager.go:215] "Topology Admit Handler" podUID="38a5da82-9cb7-4f2a-8258-393b1cb27c8c" podNamespace="kube-system" podName="coredns-5dd5756b68-wr2cl"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.314893    1380 topology_manager.go:215] "Topology Admit Handler" podUID="05d3f69a-6463-4fc2-b3f8-fb6ed350c50f" podNamespace="kube-system" podName="storage-provisioner"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.425236    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38a5da82-9cb7-4f2a-8258-393b1cb27c8c-config-volume\") pod \"coredns-5dd5756b68-wr2cl\" (UID: \"38a5da82-9cb7-4f2a-8258-393b1cb27c8c\") " pod="kube-system/coredns-5dd5756b68-wr2cl"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.425302    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6gq7\" (UniqueName: \"kubernetes.io/projected/38a5da82-9cb7-4f2a-8258-393b1cb27c8c-kube-api-access-s6gq7\") pod \"coredns-5dd5756b68-wr2cl\" (UID: \"38a5da82-9cb7-4f2a-8258-393b1cb27c8c\") " pod="kube-system/coredns-5dd5756b68-wr2cl"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.425338    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/05d3f69a-6463-4fc2-b3f8-fb6ed350c50f-tmp\") pod \"storage-provisioner\" (UID: \"05d3f69a-6463-4fc2-b3f8-fb6ed350c50f\") " pod="kube-system/storage-provisioner"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: I1229 07:52:35.425363    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km44d\" (UniqueName: \"kubernetes.io/projected/05d3f69a-6463-4fc2-b3f8-fb6ed350c50f-kube-api-access-km44d\") pod \"storage-provisioner\" (UID: \"05d3f69a-6463-4fc2-b3f8-fb6ed350c50f\") " pod="kube-system/storage-provisioner"
	Dec 29 07:52:35 old-k8s-version-512867 kubelet[1380]: W1229 07:52:35.655738    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/crio-b8b054bdfc20a3f0958af92aedf992b1a9b6b2aa14c078b649ac03c9696d1379 WatchSource:0}: Error finding container b8b054bdfc20a3f0958af92aedf992b1a9b6b2aa14c078b649ac03c9696d1379: Status 404 returned error can't find the container with id b8b054bdfc20a3f0958af92aedf992b1a9b6b2aa14c078b649ac03c9696d1379
	Dec 29 07:52:36 old-k8s-version-512867 kubelet[1380]: I1229 07:52:36.194394    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wr2cl" podStartSLOduration=15.194351636 podCreationTimestamp="2025-12-29 07:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:52:36.177714256 +0000 UTC m=+28.364310457" watchObservedRunningTime="2025-12-29 07:52:36.194351636 +0000 UTC m=+28.380947821"
	Dec 29 07:52:38 old-k8s-version-512867 kubelet[1380]: I1229 07:52:38.266133    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.266072699 podCreationTimestamp="2025-12-29 07:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:52:36.234881852 +0000 UTC m=+28.421478045" watchObservedRunningTime="2025-12-29 07:52:38.266072699 +0000 UTC m=+30.452668908"
	Dec 29 07:52:38 old-k8s-version-512867 kubelet[1380]: I1229 07:52:38.266572    1380 topology_manager.go:215] "Topology Admit Handler" podUID="76520809-8782-4067-b741-af44649b8703" podNamespace="default" podName="busybox"
	Dec 29 07:52:38 old-k8s-version-512867 kubelet[1380]: I1229 07:52:38.441169    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptkj\" (UniqueName: \"kubernetes.io/projected/76520809-8782-4067-b741-af44649b8703-kube-api-access-9ptkj\") pod \"busybox\" (UID: \"76520809-8782-4067-b741-af44649b8703\") " pod="default/busybox"
	
	
	==> storage-provisioner [bba7f0c152c9aa2e43c038a12a0e1262a2d765db47c42f59f31a7725fd739c4b] <==
	I1229 07:52:35.694962       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:52:35.723369       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:52:35.723488       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 07:52:35.732981       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:52:35.733371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-512867_282bcd09-7ff6-4ed2-ad03-882ca26ee422!
	I1229 07:52:35.746151       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"705bfbd5-8239-46ab-90c2-30c511af11a6", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-512867_282bcd09-7ff6-4ed2-ad03-882ca26ee422 became leader
	I1229 07:52:35.837565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-512867_282bcd09-7ff6-4ed2-ad03-882ca26ee422!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512867 -n old-k8s-version-512867
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-512867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-512867 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-512867 --alsologtostderr -v=1: exit status 80 (2.437712412s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-512867 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:54:01.337202  926508 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:54:01.337788  926508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:54:01.337805  926508 out.go:374] Setting ErrFile to fd 2...
	I1229 07:54:01.337812  926508 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:54:01.338626  926508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:54:01.339071  926508 out.go:368] Setting JSON to false
	I1229 07:54:01.339155  926508 mustload.go:66] Loading cluster: old-k8s-version-512867
	I1229 07:54:01.339844  926508 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:54:01.340585  926508 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:54:01.359459  926508 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:54:01.359805  926508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:54:01.424064  926508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:54:01.414753416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:54:01.424709  926508 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-512867 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:54:01.430152  926508 out.go:179] * Pausing node old-k8s-version-512867 ... 
	I1229 07:54:01.433076  926508 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:54:01.433441  926508 ssh_runner.go:195] Run: systemctl --version
	I1229 07:54:01.433490  926508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:54:01.462515  926508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:54:01.575515  926508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:54:01.588579  926508 pause.go:52] kubelet running: true
	I1229 07:54:01.588652  926508 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:54:01.808152  926508 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:54:01.808237  926508 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:54:01.883829  926508 cri.go:96] found id: "ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5"
	I1229 07:54:01.883853  926508 cri.go:96] found id: "ac519663e98ff88103f47d543ecf11f3edaef9bbf0f16273dd7dbd903f11025a"
	I1229 07:54:01.883874  926508 cri.go:96] found id: "68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac"
	I1229 07:54:01.883878  926508 cri.go:96] found id: "52aa628b40d1b389c30840509ea0f999ff48964de6ba998a2f49e58559f8d12c"
	I1229 07:54:01.883883  926508 cri.go:96] found id: "cf17b98ff46bb1bafa413e0da3f2d55d6ac9e3e13454e07df58fdb87b249226e"
	I1229 07:54:01.883886  926508 cri.go:96] found id: "39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9"
	I1229 07:54:01.883890  926508 cri.go:96] found id: "4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b"
	I1229 07:54:01.883893  926508 cri.go:96] found id: "0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401"
	I1229 07:54:01.883896  926508 cri.go:96] found id: "946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d"
	I1229 07:54:01.883903  926508 cri.go:96] found id: "b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	I1229 07:54:01.883914  926508 cri.go:96] found id: "43102affada64f3948914ab09a70e4608524e9afb6ab6d838c6ad810b7638c93"
	I1229 07:54:01.883920  926508 cri.go:96] found id: ""
	I1229 07:54:01.883976  926508 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:54:01.902445  926508 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:54:01Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:54:02.145938  926508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:54:02.160516  926508 pause.go:52] kubelet running: false
	I1229 07:54:02.160592  926508 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:54:02.360739  926508 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:54:02.360910  926508 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:54:02.431985  926508 cri.go:96] found id: "ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5"
	I1229 07:54:02.432011  926508 cri.go:96] found id: "ac519663e98ff88103f47d543ecf11f3edaef9bbf0f16273dd7dbd903f11025a"
	I1229 07:54:02.432016  926508 cri.go:96] found id: "68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac"
	I1229 07:54:02.432020  926508 cri.go:96] found id: "52aa628b40d1b389c30840509ea0f999ff48964de6ba998a2f49e58559f8d12c"
	I1229 07:54:02.432024  926508 cri.go:96] found id: "cf17b98ff46bb1bafa413e0da3f2d55d6ac9e3e13454e07df58fdb87b249226e"
	I1229 07:54:02.432028  926508 cri.go:96] found id: "39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9"
	I1229 07:54:02.432031  926508 cri.go:96] found id: "4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b"
	I1229 07:54:02.432035  926508 cri.go:96] found id: "0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401"
	I1229 07:54:02.432038  926508 cri.go:96] found id: "946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d"
	I1229 07:54:02.432100  926508 cri.go:96] found id: "b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	I1229 07:54:02.432111  926508 cri.go:96] found id: "43102affada64f3948914ab09a70e4608524e9afb6ab6d838c6ad810b7638c93"
	I1229 07:54:02.432114  926508 cri.go:96] found id: ""
	I1229 07:54:02.432178  926508 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:54:02.842097  926508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:54:02.855937  926508 pause.go:52] kubelet running: false
	I1229 07:54:02.856019  926508 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:54:03.044648  926508 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:54:03.044817  926508 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:54:03.114202  926508 cri.go:96] found id: "ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5"
	I1229 07:54:03.114278  926508 cri.go:96] found id: "ac519663e98ff88103f47d543ecf11f3edaef9bbf0f16273dd7dbd903f11025a"
	I1229 07:54:03.114292  926508 cri.go:96] found id: "68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac"
	I1229 07:54:03.114297  926508 cri.go:96] found id: "52aa628b40d1b389c30840509ea0f999ff48964de6ba998a2f49e58559f8d12c"
	I1229 07:54:03.114300  926508 cri.go:96] found id: "cf17b98ff46bb1bafa413e0da3f2d55d6ac9e3e13454e07df58fdb87b249226e"
	I1229 07:54:03.114305  926508 cri.go:96] found id: "39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9"
	I1229 07:54:03.114308  926508 cri.go:96] found id: "4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b"
	I1229 07:54:03.114311  926508 cri.go:96] found id: "0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401"
	I1229 07:54:03.114314  926508 cri.go:96] found id: "946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d"
	I1229 07:54:03.114326  926508 cri.go:96] found id: "b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	I1229 07:54:03.114359  926508 cri.go:96] found id: "43102affada64f3948914ab09a70e4608524e9afb6ab6d838c6ad810b7638c93"
	I1229 07:54:03.114370  926508 cri.go:96] found id: ""
	I1229 07:54:03.114433  926508 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:54:03.419202  926508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:54:03.435373  926508 pause.go:52] kubelet running: false
	I1229 07:54:03.435441  926508 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:54:03.623863  926508 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:54:03.623939  926508 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:54:03.697347  926508 cri.go:96] found id: "ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5"
	I1229 07:54:03.697383  926508 cri.go:96] found id: "ac519663e98ff88103f47d543ecf11f3edaef9bbf0f16273dd7dbd903f11025a"
	I1229 07:54:03.697388  926508 cri.go:96] found id: "68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac"
	I1229 07:54:03.697392  926508 cri.go:96] found id: "52aa628b40d1b389c30840509ea0f999ff48964de6ba998a2f49e58559f8d12c"
	I1229 07:54:03.697395  926508 cri.go:96] found id: "cf17b98ff46bb1bafa413e0da3f2d55d6ac9e3e13454e07df58fdb87b249226e"
	I1229 07:54:03.697399  926508 cri.go:96] found id: "39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9"
	I1229 07:54:03.697401  926508 cri.go:96] found id: "4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b"
	I1229 07:54:03.697405  926508 cri.go:96] found id: "0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401"
	I1229 07:54:03.697408  926508 cri.go:96] found id: "946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d"
	I1229 07:54:03.697414  926508 cri.go:96] found id: "b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	I1229 07:54:03.697418  926508 cri.go:96] found id: "43102affada64f3948914ab09a70e4608524e9afb6ab6d838c6ad810b7638c93"
	I1229 07:54:03.697421  926508 cri.go:96] found id: ""
	I1229 07:54:03.697472  926508 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:54:03.713681  926508 out.go:203] 
	W1229 07:54:03.716545  926508 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:54:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:54:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:54:03.716567  926508 out.go:285] * 
	* 
	W1229 07:54:03.720759  926508 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:54:03.723832  926508 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-512867 --alsologtostderr -v=1 failed: exit status 80
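The stderr above shows the pause path repeatedly running "sudo runc list -f json" on the node and exiting with GUEST_PAUSE once its retries are exhausted, because /run/runc does not exist on this CRI-O node. As a minimal sketch (not part of the test output), assuming the old-k8s-version-512867 profile is still running, the same two checks can be reproduced by hand with the commands taken from the log:
	# list kube-system containers the same way the pause code path does (via crictl)
	out/minikube-linux-arm64 ssh -p old-k8s-version-512867 -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the call that actually fails: runc has no state directory at /run/runc on this node
	out/minikube-linux-arm64 ssh -p old-k8s-version-512867 -- sudo runc list -f json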
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-512867
helpers_test.go:244: (dbg) docker inspect old-k8s-version-512867:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f",
	        "Created": "2025-12-29T07:51:42.199419041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 923884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:53:02.588555297Z",
	            "FinishedAt": "2025-12-29T07:53:01.75247634Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/hostname",
	        "HostsPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/hosts",
	        "LogPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f-json.log",
	        "Name": "/old-k8s-version-512867",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-512867:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-512867",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f",
	                "LowerDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-512867",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-512867/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-512867",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-512867",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-512867",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e27959998c39c4eaaca926d494ccccadf6d22bbdd8bbd37ffa7f68c44216cd5",
	            "SandboxKey": "/var/run/docker/netns/7e27959998c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-512867": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:87:0d:6b:2f:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41cc9a6bc794235538e736c190e047c9f1df45f342ddeffb05ec66c680fed733",
	                    "EndpointID": "e2dde20e147f6bbbe546be5ab8fae83abf24ea4128cd4f5de704f5f04d0b2f76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-512867",
	                        "f522ea5fbf77"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
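The inspect JSON above is dumped because its "NetworkSettings.Ports" block is what the tooling relies on to reach the node over SSH: later in this log, minikube resolves the host port mapped to the container's 22/tcp (33818 here) with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, as the cli_runner lines show. The sketch below is a minimal, hypothetical reproduction of that lookup outside the test suite; the container name, the template string, and the expected port come from the output above, while the program structure and error handling are illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Profile/container name taken from the inspect output above.
		const name = "old-k8s-version-512867"
		// Same Go template the cli_runner invocations use to resolve the SSH port.
		const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			// Container removed or docker daemon unreachable.
			fmt.Println("inspect failed:", err)
			return
		}
		// Prints "33818" for the NetworkSettings shown above.
		fmt.Println(strings.TrimSpace(string(out)))
	}
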
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867: exit status 2 (350.107283ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-512867 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-512867 logs -n 25: (1.331537373s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-095300 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo containerd config dump                                                                                                                                                                                                  │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo crio config                                                                                                                                                                                                             │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ delete  │ -p cilium-095300                                                                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:45 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562  │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:53:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:53:02.303868  923761 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:53:02.304068  923761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:02.304096  923761 out.go:374] Setting ErrFile to fd 2...
	I1229 07:53:02.304117  923761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:02.304403  923761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:53:02.304872  923761 out.go:368] Setting JSON to false
	I1229 07:53:02.305787  923761 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16534,"bootTime":1766978248,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:53:02.305887  923761 start.go:143] virtualization:  
	I1229 07:53:02.311032  923761 out.go:179] * [old-k8s-version-512867] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:53:02.314136  923761 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:53:02.314277  923761 notify.go:221] Checking for updates...
	I1229 07:53:02.320176  923761 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:53:02.323166  923761 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:53:02.326101  923761 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:53:02.329506  923761 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:53:02.332466  923761 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:53:02.335714  923761 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:53:02.339156  923761 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1229 07:53:02.342015  923761 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:53:02.376196  923761 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:53:02.376331  923761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:53:02.434735  923761 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:53:02.425265587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:53:02.434858  923761 docker.go:319] overlay module found
	I1229 07:53:02.438041  923761 out.go:179] * Using the docker driver based on existing profile
	I1229 07:53:02.440932  923761 start.go:309] selected driver: docker
	I1229 07:53:02.440957  923761 start.go:928] validating driver "docker" against &{Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:53:02.441067  923761 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:53:02.441783  923761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:53:02.501062  923761 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:53:02.4912873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:53:02.501405  923761 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:53:02.501443  923761 cni.go:84] Creating CNI manager for ""
	I1229 07:53:02.501497  923761 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:53:02.501541  923761 start.go:353] cluster config:
	{Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:53:02.506694  923761 out.go:179] * Starting "old-k8s-version-512867" primary control-plane node in "old-k8s-version-512867" cluster
	I1229 07:53:02.509567  923761 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:53:02.512474  923761 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:53:02.515476  923761 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:53:02.515533  923761 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:53:02.515547  923761 cache.go:65] Caching tarball of preloaded images
	I1229 07:53:02.515552  923761 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:53:02.515651  923761 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:53:02.515662  923761 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1229 07:53:02.515778  923761 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json ...
	I1229 07:53:02.535356  923761 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:53:02.535376  923761 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:53:02.535390  923761 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:53:02.535422  923761 start.go:360] acquireMachinesLock for old-k8s-version-512867: {Name:mke66a3bfc4d2544e7318e37a0e696241f8d3b35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:53:02.535472  923761 start.go:364] duration metric: took 34.035µs to acquireMachinesLock for "old-k8s-version-512867"
	I1229 07:53:02.535498  923761 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:53:02.535503  923761 fix.go:54] fixHost starting: 
	I1229 07:53:02.535760  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:02.552752  923761 fix.go:112] recreateIfNeeded on old-k8s-version-512867: state=Stopped err=<nil>
	W1229 07:53:02.552813  923761 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:53:02.556145  923761 out.go:252] * Restarting existing docker container for "old-k8s-version-512867" ...
	I1229 07:53:02.556238  923761 cli_runner.go:164] Run: docker start old-k8s-version-512867
	I1229 07:53:02.807858  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:02.838309  923761 kic.go:430] container "old-k8s-version-512867" state is running.
	I1229 07:53:02.838707  923761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:53:02.859959  923761 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json ...
	I1229 07:53:02.860179  923761 machine.go:94] provisionDockerMachine start ...
	I1229 07:53:02.860234  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:02.881489  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:02.881809  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:02.881817  923761 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:53:02.882502  923761 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:53:06.044725  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512867
	
	I1229 07:53:06.044843  923761 ubuntu.go:182] provisioning hostname "old-k8s-version-512867"
	I1229 07:53:06.044934  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.063856  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:06.064189  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:06.064202  923761 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-512867 && echo "old-k8s-version-512867" | sudo tee /etc/hostname
	I1229 07:53:06.226551  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512867
	
	I1229 07:53:06.226632  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.244939  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:06.245263  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:06.245285  923761 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-512867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-512867/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-512867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:53:06.397228  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:53:06.397319  923761 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:53:06.397374  923761 ubuntu.go:190] setting up certificates
	I1229 07:53:06.397405  923761 provision.go:84] configureAuth start
	I1229 07:53:06.397509  923761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:53:06.414732  923761 provision.go:143] copyHostCerts
	I1229 07:53:06.414807  923761 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:53:06.414825  923761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:53:06.414908  923761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:53:06.415016  923761 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:53:06.415022  923761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:53:06.415049  923761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:53:06.415106  923761 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:53:06.415111  923761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:53:06.415134  923761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:53:06.415236  923761 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-512867 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-512867]
	I1229 07:53:06.740444  923761 provision.go:177] copyRemoteCerts
	I1229 07:53:06.740563  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:53:06.740619  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.758956  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:06.864911  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:53:06.882973  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:53:06.901435  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:53:06.919886  923761 provision.go:87] duration metric: took 522.443947ms to configureAuth
	I1229 07:53:06.919916  923761 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:53:06.920162  923761 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:53:06.920278  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.937558  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:06.937867  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:06.937892  923761 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:53:07.295849  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:53:07.295875  923761 machine.go:97] duration metric: took 4.435687239s to provisionDockerMachine
	I1229 07:53:07.295895  923761 start.go:293] postStartSetup for "old-k8s-version-512867" (driver="docker")
	I1229 07:53:07.295907  923761 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:53:07.295977  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:53:07.296025  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.317498  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.429238  923761 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:53:07.432927  923761 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:53:07.432957  923761 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:53:07.432969  923761 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:53:07.433024  923761 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:53:07.433102  923761 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:53:07.433210  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:53:07.442301  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:53:07.463382  923761 start.go:296] duration metric: took 167.470934ms for postStartSetup
	I1229 07:53:07.463462  923761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:53:07.463515  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.487352  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.597826  923761 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:53:07.602632  923761 fix.go:56] duration metric: took 5.067120778s for fixHost
	I1229 07:53:07.602662  923761 start.go:83] releasing machines lock for "old-k8s-version-512867", held for 5.06717933s
	I1229 07:53:07.602762  923761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:53:07.620748  923761 ssh_runner.go:195] Run: cat /version.json
	I1229 07:53:07.620828  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.621142  923761 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:53:07.621209  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.641976  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.643674  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.836600  923761 ssh_runner.go:195] Run: systemctl --version
	I1229 07:53:07.843262  923761 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:53:07.883760  923761 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:53:07.888483  923761 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:53:07.888583  923761 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:53:07.896862  923761 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:53:07.896892  923761 start.go:496] detecting cgroup driver to use...
	I1229 07:53:07.896929  923761 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:53:07.896979  923761 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:53:07.912529  923761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:53:07.926038  923761 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:53:07.926102  923761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:53:07.941563  923761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:53:07.954724  923761 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:53:08.070894  923761 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:53:08.198322  923761 docker.go:234] disabling docker service ...
	I1229 07:53:08.198469  923761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:53:08.215146  923761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:53:08.229573  923761 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:53:08.356978  923761 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:53:08.478427  923761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:53:08.491252  923761 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:53:08.504988  923761 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1229 07:53:08.505076  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.513956  923761 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:53:08.514048  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.523659  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.532711  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.541905  923761 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:53:08.549908  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.558735  923761 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.567260  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.576054  923761 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:53:08.583956  923761 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:53:08.591704  923761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:53:08.703580  923761 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:53:08.891315  923761 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:53:08.891403  923761 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:53:08.895225  923761 start.go:574] Will wait 60s for crictl version
	I1229 07:53:08.895284  923761 ssh_runner.go:195] Run: which crictl
	I1229 07:53:08.898752  923761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:53:08.928142  923761 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:53:08.928239  923761 ssh_runner.go:195] Run: crio --version
	I1229 07:53:08.968526  923761 ssh_runner.go:195] Run: crio --version
	I1229 07:53:09.004414  923761 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I1229 07:53:09.007685  923761 cli_runner.go:164] Run: docker network inspect old-k8s-version-512867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:53:09.025021  923761 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:53:09.029203  923761 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:53:09.038918  923761 kubeadm.go:884] updating cluster {Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:53:09.039040  923761 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:53:09.039091  923761 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:53:09.075356  923761 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:53:09.075377  923761 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:53:09.075433  923761 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:53:09.101870  923761 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:53:09.101899  923761 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:53:09.101907  923761 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1229 07:53:09.102008  923761 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-512867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:53:09.102097  923761 ssh_runner.go:195] Run: crio config
	I1229 07:53:09.169201  923761 cni.go:84] Creating CNI manager for ""
	I1229 07:53:09.169227  923761 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:53:09.169248  923761 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:53:09.169297  923761 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-512867 NodeName:old-k8s-version-512867 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:53:09.169486  923761 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-512867"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:53:09.169601  923761 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1229 07:53:09.177667  923761 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:53:09.177751  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:53:09.185180  923761 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1229 07:53:09.197578  923761 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:53:09.210297  923761 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1229 07:53:09.223110  923761 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:53:09.227157  923761 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:53:09.237003  923761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:53:09.355975  923761 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:53:09.376812  923761 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867 for IP: 192.168.85.2
	I1229 07:53:09.376918  923761 certs.go:195] generating shared ca certs ...
	I1229 07:53:09.376961  923761 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:09.377280  923761 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:53:09.377437  923761 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:53:09.377537  923761 certs.go:257] generating profile certs ...
	I1229 07:53:09.377738  923761 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.key
	I1229 07:53:09.377940  923761 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key.14700b57
	I1229 07:53:09.378079  923761 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key
	I1229 07:53:09.378346  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:53:09.378430  923761 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:53:09.378489  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:53:09.378567  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:53:09.378685  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:53:09.378904  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:53:09.379077  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:53:09.380092  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:53:09.409445  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:53:09.435746  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:53:09.464410  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:53:09.491716  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1229 07:53:09.531766  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:53:09.566179  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:53:09.592620  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:53:09.617423  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:53:09.636630  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:53:09.656141  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:53:09.675718  923761 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:53:09.694021  923761 ssh_runner.go:195] Run: openssl version
	I1229 07:53:09.701080  923761 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.710851  923761 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:53:09.720846  923761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.725704  923761 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.725828  923761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.768625  923761 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:53:09.776738  923761 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.784663  923761 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:53:09.792709  923761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.796737  923761 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.796829  923761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.838055  923761 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:53:09.845901  923761 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.853368  923761 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:53:09.861059  923761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.864961  923761 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.865053  923761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.907356  923761 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:53:09.915023  923761 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:53:09.920176  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:53:09.961608  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:53:10.003591  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:53:10.064199  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:53:10.129414  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:53:10.203555  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:53:10.289514  923761 kubeadm.go:401] StartCluster: {Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:53:10.289606  923761 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:53:10.289686  923761 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:53:10.336584  923761 cri.go:96] found id: "39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9"
	I1229 07:53:10.336609  923761 cri.go:96] found id: "4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b"
	I1229 07:53:10.336615  923761 cri.go:96] found id: "0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401"
	I1229 07:53:10.336618  923761 cri.go:96] found id: "946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d"
	I1229 07:53:10.336628  923761 cri.go:96] found id: ""
	I1229 07:53:10.336676  923761 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:53:10.351590  923761 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:53:10Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:53:10.351671  923761 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:53:10.361855  923761 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:53:10.361878  923761 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:53:10.361930  923761 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:53:10.372459  923761 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:53:10.372934  923761 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-512867" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:53:10.373039  923761 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-512867" cluster setting kubeconfig missing "old-k8s-version-512867" context setting]
	I1229 07:53:10.373324  923761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:10.374713  923761 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:53:10.386773  923761 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:53:10.386808  923761 kubeadm.go:602] duration metric: took 24.924118ms to restartPrimaryControlPlane
	I1229 07:53:10.386819  923761 kubeadm.go:403] duration metric: took 97.316176ms to StartCluster
	I1229 07:53:10.386834  923761 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:10.386900  923761 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:53:10.387546  923761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:10.387755  923761 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:53:10.388151  923761 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:53:10.388224  923761 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-512867"
	I1229 07:53:10.388237  923761 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-512867"
	W1229 07:53:10.388243  923761 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:53:10.388264  923761 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:53:10.388726  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.389238  923761 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:53:10.389334  923761 addons.go:70] Setting dashboard=true in profile "old-k8s-version-512867"
	I1229 07:53:10.389361  923761 addons.go:239] Setting addon dashboard=true in "old-k8s-version-512867"
	W1229 07:53:10.389391  923761 addons.go:248] addon dashboard should already be in state true
	I1229 07:53:10.389434  923761 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:53:10.389865  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.392576  923761 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-512867"
	I1229 07:53:10.392614  923761 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-512867"
	I1229 07:53:10.392799  923761 out.go:179] * Verifying Kubernetes components...
	I1229 07:53:10.393274  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.395997  923761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:53:10.446447  923761 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-512867"
	W1229 07:53:10.446469  923761 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:53:10.446493  923761 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:53:10.446911  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.451225  923761 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:53:10.457629  923761 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:53:10.457746  923761 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:53:10.460960  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:53:10.460987  923761 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:53:10.461055  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:10.461285  923761 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:53:10.461297  923761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:53:10.461336  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:10.492524  923761 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:53:10.492545  923761 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:53:10.492606  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:10.508888  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:10.538144  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:10.541338  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:10.769268  923761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:53:10.776962  923761 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:53:10.813865  923761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:53:10.861816  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:53:10.861886  923761 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:53:10.950428  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:53:10.950493  923761 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:53:11.051662  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:53:11.051731  923761 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:53:11.097672  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:53:11.097738  923761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:53:11.121337  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:53:11.121403  923761 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:53:11.146578  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:53:11.146645  923761 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:53:11.170206  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:53:11.170292  923761 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:53:11.190367  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:53:11.190444  923761 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:53:11.227925  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:53:11.228004  923761 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:53:11.254148  923761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:53:16.371056  923761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.601750549s)
	I1229 07:53:16.371115  923761 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.594132849s)
	I1229 07:53:16.371151  923761 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-512867" to be "Ready" ...
	I1229 07:53:16.371485  923761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.557597023s)
	I1229 07:53:16.397571  923761 node_ready.go:49] node "old-k8s-version-512867" is "Ready"
	I1229 07:53:16.397596  923761 node_ready.go:38] duration metric: took 26.4331ms for node "old-k8s-version-512867" to be "Ready" ...
	I1229 07:53:16.397609  923761 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:53:16.397664  923761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:53:16.982870  923761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.728627073s)
	I1229 07:53:16.982931  923761 api_server.go:72] duration metric: took 6.595140301s to wait for apiserver process to appear ...
	I1229 07:53:16.983106  923761 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:53:16.983132  923761 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:53:16.986122  923761 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-512867 addons enable metrics-server
	
	I1229 07:53:16.989064  923761 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1229 07:53:16.991536  923761 addons.go:530] duration metric: took 6.603383162s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1229 07:53:16.995332  923761 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:53:16.996876  923761 api_server.go:141] control plane version: v1.28.0
	I1229 07:53:16.996902  923761 api_server.go:131] duration metric: took 13.787762ms to wait for apiserver health ...
	I1229 07:53:16.996912  923761 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:53:17.001026  923761 system_pods.go:59] 8 kube-system pods found
	I1229 07:53:17.001072  923761 system_pods.go:61] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:53:17.001082  923761 system_pods.go:61] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:53:17.001089  923761 system_pods.go:61] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:53:17.001096  923761 system_pods.go:61] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:53:17.001103  923761 system_pods.go:61] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:53:17.001108  923761 system_pods.go:61] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:53:17.001115  923761 system_pods.go:61] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:53:17.001120  923761 system_pods.go:61] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Running
	I1229 07:53:17.001128  923761 system_pods.go:74] duration metric: took 4.210362ms to wait for pod list to return data ...
	I1229 07:53:17.001139  923761 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:53:17.005256  923761 default_sa.go:45] found service account: "default"
	I1229 07:53:17.005341  923761 default_sa.go:55] duration metric: took 4.191039ms for default service account to be created ...
	I1229 07:53:17.005359  923761 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:53:17.009713  923761 system_pods.go:86] 8 kube-system pods found
	I1229 07:53:17.009750  923761 system_pods.go:89] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:53:17.009761  923761 system_pods.go:89] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:53:17.009768  923761 system_pods.go:89] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:53:17.009776  923761 system_pods.go:89] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:53:17.009799  923761 system_pods.go:89] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:53:17.009807  923761 system_pods.go:89] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:53:17.009815  923761 system_pods.go:89] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:53:17.009824  923761 system_pods.go:89] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Running
	I1229 07:53:17.009832  923761 system_pods.go:126] duration metric: took 4.465757ms to wait for k8s-apps to be running ...
	I1229 07:53:17.009840  923761 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:53:17.009910  923761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:53:17.024112  923761 system_svc.go:56] duration metric: took 14.261971ms WaitForService to wait for kubelet
	I1229 07:53:17.024185  923761 kubeadm.go:587] duration metric: took 6.636394219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:53:17.024222  923761 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:53:17.027508  923761 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:53:17.027589  923761 node_conditions.go:123] node cpu capacity is 2
	I1229 07:53:17.027617  923761 node_conditions.go:105] duration metric: took 3.355966ms to run NodePressure ...
	I1229 07:53:17.027658  923761 start.go:242] waiting for startup goroutines ...
	I1229 07:53:17.027683  923761 start.go:247] waiting for cluster config update ...
	I1229 07:53:17.027711  923761 start.go:256] writing updated cluster config ...
	I1229 07:53:17.028049  923761 ssh_runner.go:195] Run: rm -f paused
	I1229 07:53:17.032335  923761 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:53:17.037102  923761 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wr2cl" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:53:19.043232  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:21.043800  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:23.542840  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:25.543392  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:27.544137  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:30.047169  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:32.543854  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:35.042861  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:37.044403  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:39.543261  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:41.543575  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:44.043095  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:46.543414  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	I1229 07:53:48.046559  923761 pod_ready.go:94] pod "coredns-5dd5756b68-wr2cl" is "Ready"
	I1229 07:53:48.046585  923761 pod_ready.go:86] duration metric: took 31.009453304s for pod "coredns-5dd5756b68-wr2cl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.051019  923761 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.060329  923761 pod_ready.go:94] pod "etcd-old-k8s-version-512867" is "Ready"
	I1229 07:53:48.060361  923761 pod_ready.go:86] duration metric: took 9.31473ms for pod "etcd-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.064569  923761 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.070247  923761 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-512867" is "Ready"
	I1229 07:53:48.070275  923761 pod_ready.go:86] duration metric: took 5.676678ms for pod "kube-apiserver-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.073991  923761 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.241075  923761 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-512867" is "Ready"
	I1229 07:53:48.241155  923761 pod_ready.go:86] duration metric: took 167.134535ms for pod "kube-controller-manager-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.442297  923761 pod_ready.go:83] waiting for pod "kube-proxy-gdm8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.841420  923761 pod_ready.go:94] pod "kube-proxy-gdm8d" is "Ready"
	I1229 07:53:48.841452  923761 pod_ready.go:86] duration metric: took 399.066413ms for pod "kube-proxy-gdm8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:49.042305  923761 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:49.443203  923761 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-512867" is "Ready"
	I1229 07:53:49.443227  923761 pod_ready.go:86] duration metric: took 400.894489ms for pod "kube-scheduler-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:49.443240  923761 pod_ready.go:40] duration metric: took 32.410827636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:53:49.514601  923761 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1229 07:53:49.517822  923761 out.go:203] 
	W1229 07:53:49.520763  923761 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1229 07:53:49.523707  923761 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:53:49.526764  923761 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-512867" cluster and "default" namespace by default
	I1229 07:53:52.201509  913056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000842749s
	I1229 07:53:52.201542  913056 kubeadm.go:319] 
	I1229 07:53:52.201600  913056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:53:52.201638  913056 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:53:52.201746  913056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:53:52.201755  913056 kubeadm.go:319] 
	I1229 07:53:52.201859  913056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:53:52.201895  913056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:53:52.201930  913056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:53:52.201938  913056 kubeadm.go:319] 
	I1229 07:53:52.208144  913056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:53:52.208619  913056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:53:52.208743  913056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:53:52.209029  913056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:53:52.209041  913056 kubeadm.go:319] 
	I1229 07:53:52.209109  913056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:53:52.209252  913056 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000842749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:53:52.209331  913056 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1229 07:53:52.622384  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:53:52.635442  913056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:53:52.635506  913056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:53:52.644553  913056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:53:52.644574  913056 kubeadm.go:158] found existing configuration files:
	
	I1229 07:53:52.644655  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:53:52.652636  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:53:52.652702  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:53:52.660083  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:53:52.668000  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:53:52.668064  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:53:52.675501  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:53:52.683499  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:53:52.683585  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:53:52.691378  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:53:52.699077  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:53:52.699141  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:53:52.706928  913056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:53:52.743355  913056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:53:52.743680  913056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:53:52.820188  913056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:53:52.820265  913056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:53:52.820300  913056 kubeadm.go:319] OS: Linux
	I1229 07:53:52.820355  913056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:53:52.820409  913056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:53:52.820460  913056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:53:52.820512  913056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:53:52.820563  913056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:53:52.820616  913056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:53:52.820665  913056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:53:52.820716  913056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:53:52.820764  913056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:53:52.898967  913056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:53:52.899087  913056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:53:52.899183  913056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:53:52.909202  913056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:53:52.915184  913056 out.go:252]   - Generating certificates and keys ...
	I1229 07:53:52.915280  913056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:53:52.915366  913056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:53:52.915444  913056 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:53:52.915505  913056 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:53:52.915575  913056 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:53:52.915628  913056 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:53:52.915691  913056 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:53:52.915753  913056 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:53:52.915833  913056 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:53:52.915906  913056 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:53:52.915943  913056 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:53:52.916000  913056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:53:53.163034  913056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:53:53.260276  913056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:53:53.509146  913056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:53:53.923256  913056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:53:54.338003  913056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:53:54.338712  913056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:53:54.341424  913056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:53:54.344645  913056 out.go:252]   - Booting up control plane ...
	I1229 07:53:54.344752  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:53:54.344850  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:53:54.344919  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:53:54.362719  913056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:53:54.362842  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:53:54.370734  913056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:53:54.371185  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:53:54.371512  913056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:53:54.549211  913056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:53:54.549333  913056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Dec 29 07:53:46 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:46.868936284Z" level=info msg="Started container" PID=1679 containerID=ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5 description=kube-system/storage-provisioner/storage-provisioner id=079de392-b02f-4dd4-9dfe-a18dd70fc5b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d47139f4221f5d27794fc0238eb7d1740541674f2e2b8d97e19a9ebae4894bea
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.403353583Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=450b1cfa-f8c5-4140-abdc-3b2e29ab40e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.404238429Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=077f73a8-b5c3-4dd6-9150-ba7cb672321e name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.405269681Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper" id=0a47ace0-5958-462f-b680-1229d523963c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.405392816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.413799987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.415215101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.432081112Z" level=info msg="Created container b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper" id=0a47ace0-5958-462f-b680-1229d523963c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.433168659Z" level=info msg="Starting container: b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3" id=b00bdf1b-c264-458f-a4a6-a919a4e98adb name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.434941846Z" level=info msg="Started container" PID=1694 containerID=b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper id=b00bdf1b-c264-458f-a4a6-a919a4e98adb name=/runtime.v1.RuntimeService/StartContainer sandboxID=022f09fccb461fa5d061c589a1f3fd1b062719138e5c00e02d1b006aae4efb56
	Dec 29 07:53:48 old-k8s-version-512867 conmon[1692]: conmon b7b38088b9bb759e8bf9 <ninfo>: container 1694 exited with status 1
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.850810533Z" level=info msg="Removing container: c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2" id=482274e3-5107-43af-930d-993403faae4d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.858973231Z" level=info msg="Error loading conmon cgroup of container c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2: cgroup deleted" id=482274e3-5107-43af-930d-993403faae4d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.861687405Z" level=info msg="Removed container c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper" id=482274e3-5107-43af-930d-993403faae4d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.471844989Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.471882117Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.476223911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.476259981Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.480578201Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.480616618Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.485019508Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.485053214Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.485079594Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.48927239Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.489305876Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	b7b38088b9bb7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   022f09fccb461       dashboard-metrics-scraper-5f989dc9cf-z7mqw       kubernetes-dashboard
	ec432e862408c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   d47139f4221f5       storage-provisioner                              kube-system
	43102affada64       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   5e28cd90a62ae       kubernetes-dashboard-8694d4445c-cqflv            kubernetes-dashboard
	531d15892fc05       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   8259cb9a26fa6       busybox                                          default
	ac519663e98ff       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   e312838c20b69       coredns-5dd5756b68-wr2cl                         kube-system
	68fd186fabb99       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   d47139f4221f5       storage-provisioner                              kube-system
	52aa628b40d1b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           48 seconds ago      Running             kube-proxy                  1                   5241ce4cdbc98       kube-proxy-gdm8d                                 kube-system
	cf17b98ff46bb       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           48 seconds ago      Running             kindnet-cni                 1                   1c9974618a9a1       kindnet-rwfkz                                    kube-system
	39a8d4c108d73       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   41acfda192068       kube-scheduler-old-k8s-version-512867            kube-system
	4500ff662d9c4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   1b4cac6dc255e       etcd-old-k8s-version-512867                      kube-system
	0a77089c3baf8       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   9538b5bdc8f53       kube-apiserver-old-k8s-version-512867            kube-system
	946047cfc9104       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   eeb67509d8d00       kube-controller-manager-old-k8s-version-512867   kube-system
	
	
	==> coredns [ac519663e98ff88103f47d543ecf11f3edaef9bbf0f16273dd7dbd903f11025a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55429 - 36971 "HINFO IN 6050377229590491029.8308923331211008531. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056474247s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-512867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-512867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=old-k8s-version-512867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_52_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:52:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-512867
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-512867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                37780f38-62fe-49f0-844c-77007ce7ace0
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-wr2cl                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     103s
	  kube-system                 etcd-old-k8s-version-512867                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-rwfkz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-512867             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-512867    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-gdm8d                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-512867             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-z7mqw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cqflv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-512867 event: Registered Node old-k8s-version-512867 in Controller
	  Normal  NodeReady                89s                  kubelet          Node old-k8s-version-512867 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-512867 event: Registered Node old-k8s-version-512867 in Controller
	
	
	==> dmesg <==
	[Dec29 07:24] overlayfs: idmapped layers are currently not supported
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b] <==
	{"level":"info","ts":"2025-12-29T07:53:10.759618Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:53:10.704984Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-29T07:53:10.705238Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:53:10.75967Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:53:10.759685Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:53:10.705574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-29T07:53:10.760227Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-29T07:53:10.760326Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:53:10.76036Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:53:10.759568Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:53:10.759602Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:53:12.086914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:53:12.087044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:53:12.087098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:53:12.087139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.087173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.087216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.087251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.097044Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-512867 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:53:12.097222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:53:12.098327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:53:12.10088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:53:12.118285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:53:12.100948Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:53:12.136978Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 07:54:05 up  4:36,  0 user,  load average: 1.35, 1.78, 2.29
	Linux old-k8s-version-512867 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf17b98ff46bb1bafa413e0da3f2d55d6ac9e3e13454e07df58fdb87b249226e] <==
	I1229 07:53:16.260354       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:53:16.260610       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:53:16.260732       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:53:16.260749       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:53:16.260757       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:53:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:53:16.455672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:53:16.455689       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:53:16.455698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:53:16.455986       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:53:46.455388       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:53:46.456764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 07:53:46.456993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:53:46.457224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 07:53:47.755856       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:53:47.755896       1 metrics.go:72] Registering metrics
	I1229 07:53:47.755950       1 controller.go:711] "Syncing nftables rules"
	I1229 07:53:56.461830       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:53:56.461870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401] <==
	I1229 07:53:14.700749       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 07:53:14.872258       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:53:14.880552       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1229 07:53:14.880627       1 aggregator.go:166] initial CRD sync complete...
	I1229 07:53:14.880635       1 autoregister_controller.go:141] Starting autoregister controller
	I1229 07:53:14.880640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:53:14.880647       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:53:14.903122       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1229 07:53:14.909272       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1229 07:53:14.909715       1 shared_informer.go:318] Caches are synced for configmaps
	I1229 07:53:14.910655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:53:14.954599       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1229 07:53:14.954624       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1229 07:53:14.958277       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1229 07:53:15.594908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1229 07:53:16.770354       1 controller.go:624] quota admission added evaluator for: namespaces
	I1229 07:53:16.843045       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1229 07:53:16.873047       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:53:16.892380       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:53:16.904042       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1229 07:53:16.956647       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.117.181"}
	I1229 07:53:16.975663       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.55.144"}
	I1229 07:53:27.814622       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:53:27.966952       1 controller.go:624] quota admission added evaluator for: endpoints
	I1229 07:53:28.018076       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d] <==
	I1229 07:53:28.031457       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1229 07:53:28.077747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="320.252246ms"
	I1229 07:53:28.078053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.632µs"
	I1229 07:53:28.083731       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	I1229 07:53:28.087868       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-cqflv"
	I1229 07:53:28.100211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.525042ms"
	I1229 07:53:28.105106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.280238ms"
	I1229 07:53:28.120470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.238873ms"
	I1229 07:53:28.120809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.948µs"
	I1229 07:53:28.126432       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:53:28.126971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.538µs"
	I1229 07:53:28.137829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.482487ms"
	I1229 07:53:28.150730       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:53:28.150764       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1229 07:53:28.164617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.721753ms"
	I1229 07:53:28.164728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.179µs"
	I1229 07:53:33.825649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.401163ms"
	I1229 07:53:33.825734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.463µs"
	I1229 07:53:37.827810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.315µs"
	I1229 07:53:38.830092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.828µs"
	I1229 07:53:39.828360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.86µs"
	I1229 07:53:48.039686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.471945ms"
	I1229 07:53:48.039921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.33µs"
	I1229 07:53:48.867151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.243µs"
	I1229 07:53:58.421421       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.309µs"
	
	
	==> kube-proxy [52aa628b40d1b389c30840509ea0f999ff48964de6ba998a2f49e58559f8d12c] <==
	I1229 07:53:16.343465       1 server_others.go:69] "Using iptables proxy"
	I1229 07:53:16.381423       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1229 07:53:16.489382       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:53:16.500220       1 server_others.go:152] "Using iptables Proxier"
	I1229 07:53:16.500338       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1229 07:53:16.500381       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1229 07:53:16.500443       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1229 07:53:16.500705       1 server.go:846] "Version info" version="v1.28.0"
	I1229 07:53:16.501091       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:53:16.501875       1 config.go:188] "Starting service config controller"
	I1229 07:53:16.501974       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1229 07:53:16.502047       1 config.go:97] "Starting endpoint slice config controller"
	I1229 07:53:16.502077       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1229 07:53:16.515645       1 config.go:315] "Starting node config controller"
	I1229 07:53:16.515751       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1229 07:53:16.604420       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1229 07:53:16.604471       1 shared_informer.go:318] Caches are synced for service config
	I1229 07:53:16.616429       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9] <==
	I1229 07:53:12.508846       1 serving.go:348] Generated self-signed cert in-memory
	W1229 07:53:14.649397       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:53:14.649433       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:53:14.649445       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:53:14.649453       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:53:14.858443       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1229 07:53:14.858483       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:53:14.870884       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:53:14.877390       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1229 07:53:14.877556       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1229 07:53:14.878102       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1229 07:53:14.978336       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.095975     792 topology_manager.go:215] "Topology Admit Handler" podUID="22312289-ae02-4dfe-853e-a5e86e53d038" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-cqflv"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.100265     792 topology_manager.go:215] "Topology Admit Handler" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260641     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mpt\" (UniqueName: \"kubernetes.io/projected/ef6533dc-bd6a-4d38-8551-7232da63ac67-kube-api-access-h2mpt\") pod \"dashboard-metrics-scraper-5f989dc9cf-z7mqw\" (UID: \"ef6533dc-bd6a-4d38-8551-7232da63ac67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260701     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcf9\" (UniqueName: \"kubernetes.io/projected/22312289-ae02-4dfe-853e-a5e86e53d038-kube-api-access-wjcf9\") pod \"kubernetes-dashboard-8694d4445c-cqflv\" (UID: \"22312289-ae02-4dfe-853e-a5e86e53d038\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cqflv"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260731     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ef6533dc-bd6a-4d38-8551-7232da63ac67-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-z7mqw\" (UID: \"ef6533dc-bd6a-4d38-8551-7232da63ac67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260758     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/22312289-ae02-4dfe-853e-a5e86e53d038-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-cqflv\" (UID: \"22312289-ae02-4dfe-853e-a5e86e53d038\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cqflv"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: W1229 07:53:28.441884     792 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/crio-5e28cd90a62aeb2d4d142c66787c15a9294e583d0c7b57901b4ad190b1b0c43a WatchSource:0}: Error finding container 5e28cd90a62aeb2d4d142c66787c15a9294e583d0c7b57901b4ad190b1b0c43a: Status 404 returned error can't find the container with id 5e28cd90a62aeb2d4d142c66787c15a9294e583d0c7b57901b4ad190b1b0c43a
	Dec 29 07:53:33 old-k8s-version-512867 kubelet[792]: I1229 07:53:33.815138     792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cqflv" podStartSLOduration=1.2941749790000001 podCreationTimestamp="2025-12-29 07:53:28 +0000 UTC" firstStartedPulling="2025-12-29 07:53:28.449838484 +0000 UTC m=+19.074227888" lastFinishedPulling="2025-12-29 07:53:32.970703933 +0000 UTC m=+23.595093337" observedRunningTime="2025-12-29 07:53:33.813970695 +0000 UTC m=+24.438360115" watchObservedRunningTime="2025-12-29 07:53:33.815040428 +0000 UTC m=+24.439429840"
	Dec 29 07:53:37 old-k8s-version-512867 kubelet[792]: I1229 07:53:37.804750     792 scope.go:117] "RemoveContainer" containerID="be2506d276754908c867bd8059d56779bfd827c725bd05ea2dc2ee29ec51fbb4"
	Dec 29 07:53:38 old-k8s-version-512867 kubelet[792]: I1229 07:53:38.808884     792 scope.go:117] "RemoveContainer" containerID="be2506d276754908c867bd8059d56779bfd827c725bd05ea2dc2ee29ec51fbb4"
	Dec 29 07:53:38 old-k8s-version-512867 kubelet[792]: I1229 07:53:38.809189     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:38 old-k8s-version-512867 kubelet[792]: E1229 07:53:38.809716     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:53:39 old-k8s-version-512867 kubelet[792]: I1229 07:53:39.812971     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:39 old-k8s-version-512867 kubelet[792]: E1229 07:53:39.813260     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:53:46 old-k8s-version-512867 kubelet[792]: I1229 07:53:46.838624     792 scope.go:117] "RemoveContainer" containerID="68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: I1229 07:53:48.402725     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: I1229 07:53:48.847415     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: I1229 07:53:48.847874     792 scope.go:117] "RemoveContainer" containerID="b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: E1229 07:53:48.848168     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:53:58 old-k8s-version-512867 kubelet[792]: I1229 07:53:58.402421     792 scope.go:117] "RemoveContainer" containerID="b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	Dec 29 07:53:58 old-k8s-version-512867 kubelet[792]: E1229 07:53:58.402748     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:54:01 old-k8s-version-512867 kubelet[792]: I1229 07:54:01.754629     792 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 29 07:54:01 old-k8s-version-512867 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:54:01 old-k8s-version-512867 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:54:01 old-k8s-version-512867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [43102affada64f3948914ab09a70e4608524e9afb6ab6d838c6ad810b7638c93] <==
	2025/12/29 07:53:33 Using namespace: kubernetes-dashboard
	2025/12/29 07:53:33 Using in-cluster config to connect to apiserver
	2025/12/29 07:53:33 Using secret token for csrf signing
	2025/12/29 07:53:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:53:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:53:33 Successful initial request to the apiserver, version: v1.28.0
	2025/12/29 07:53:33 Generating JWE encryption key
	2025/12/29 07:53:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:53:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:53:33 Initializing JWE encryption key from synchronized object
	2025/12/29 07:53:33 Creating in-cluster Sidecar client
	2025/12/29 07:53:33 Serving insecurely on HTTP port: 9090
	2025/12/29 07:53:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:54:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:53:33 Starting overwatch
	
	
	==> storage-provisioner [68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac] <==
	I1229 07:53:16.130629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:53:46.137608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5] <==
	I1229 07:53:46.885359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:53:46.898956       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:53:46.899030       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 07:54:04.299291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:54:04.301292       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-512867_b8e69629-30c1-449b-95bf-5847e0a7210a!
	I1229 07:54:04.303099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"705bfbd5-8239-46ab-90c2-30c511af11a6", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-512867_b8e69629-30c1-449b-95bf-5847e0a7210a became leader
	I1229 07:54:04.402067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-512867_b8e69629-30c1-449b-95bf-5847e0a7210a!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512867 -n old-k8s-version-512867
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512867 -n old-k8s-version-512867: exit status 2 (359.648366ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-512867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-512867
helpers_test.go:244: (dbg) docker inspect old-k8s-version-512867:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f",
	        "Created": "2025-12-29T07:51:42.199419041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 923884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:53:02.588555297Z",
	            "FinishedAt": "2025-12-29T07:53:01.75247634Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/hostname",
	        "HostsPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/hosts",
	        "LogPath": "/var/lib/docker/containers/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f-json.log",
	        "Name": "/old-k8s-version-512867",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-512867:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-512867",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f",
	                "LowerDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03c393ff5bda96028dbc074a1b051ac8e6752eef90d48a1136552564119804c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-512867",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-512867/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-512867",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-512867",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-512867",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e27959998c39c4eaaca926d494ccccadf6d22bbdd8bbd37ffa7f68c44216cd5",
	            "SandboxKey": "/var/run/docker/netns/7e27959998c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-512867": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:87:0d:6b:2f:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41cc9a6bc794235538e736c190e047c9f1df45f342ddeffb05ec66c680fed733",
	                    "EndpointID": "e2dde20e147f6bbbe546be5ab8fae83abf24ea4128cd4f5de704f5f04d0b2f76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-512867",
	                        "f522ea5fbf77"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867: exit status 2 (395.2394ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-512867 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-512867 logs -n 25: (1.291408407s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-095300 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo containerd config dump                                                                                                                                                                                                  │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo crio config                                                                                                                                                                                                             │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ delete  │ -p cilium-095300                                                                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:45 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562  │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:53:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:53:02.303868  923761 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:53:02.304068  923761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:02.304096  923761 out.go:374] Setting ErrFile to fd 2...
	I1229 07:53:02.304117  923761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:53:02.304403  923761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:53:02.304872  923761 out.go:368] Setting JSON to false
	I1229 07:53:02.305787  923761 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16534,"bootTime":1766978248,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:53:02.305887  923761 start.go:143] virtualization:  
	I1229 07:53:02.311032  923761 out.go:179] * [old-k8s-version-512867] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:53:02.314136  923761 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:53:02.314277  923761 notify.go:221] Checking for updates...
	I1229 07:53:02.320176  923761 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:53:02.323166  923761 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:53:02.326101  923761 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:53:02.329506  923761 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:53:02.332466  923761 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:53:02.335714  923761 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:53:02.339156  923761 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1229 07:53:02.342015  923761 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:53:02.376196  923761 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:53:02.376331  923761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:53:02.434735  923761 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:53:02.425265587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:53:02.434858  923761 docker.go:319] overlay module found
	I1229 07:53:02.438041  923761 out.go:179] * Using the docker driver based on existing profile
	I1229 07:53:02.440932  923761 start.go:309] selected driver: docker
	I1229 07:53:02.440957  923761 start.go:928] validating driver "docker" against &{Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:53:02.441067  923761 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:53:02.441783  923761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:53:02.501062  923761 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:53:02.4912873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:53:02.501405  923761 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:53:02.501443  923761 cni.go:84] Creating CNI manager for ""
	I1229 07:53:02.501497  923761 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:53:02.501541  923761 start.go:353] cluster config:
	{Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:53:02.506694  923761 out.go:179] * Starting "old-k8s-version-512867" primary control-plane node in "old-k8s-version-512867" cluster
	I1229 07:53:02.509567  923761 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:53:02.512474  923761 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:53:02.515476  923761 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:53:02.515533  923761 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:53:02.515547  923761 cache.go:65] Caching tarball of preloaded images
	I1229 07:53:02.515552  923761 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:53:02.515651  923761 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:53:02.515662  923761 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1229 07:53:02.515778  923761 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json ...
	I1229 07:53:02.535356  923761 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:53:02.535376  923761 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:53:02.535390  923761 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:53:02.535422  923761 start.go:360] acquireMachinesLock for old-k8s-version-512867: {Name:mke66a3bfc4d2544e7318e37a0e696241f8d3b35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:53:02.535472  923761 start.go:364] duration metric: took 34.035µs to acquireMachinesLock for "old-k8s-version-512867"
	I1229 07:53:02.535498  923761 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:53:02.535503  923761 fix.go:54] fixHost starting: 
	I1229 07:53:02.535760  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:02.552752  923761 fix.go:112] recreateIfNeeded on old-k8s-version-512867: state=Stopped err=<nil>
	W1229 07:53:02.552813  923761 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:53:02.556145  923761 out.go:252] * Restarting existing docker container for "old-k8s-version-512867" ...
	I1229 07:53:02.556238  923761 cli_runner.go:164] Run: docker start old-k8s-version-512867
	I1229 07:53:02.807858  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:02.838309  923761 kic.go:430] container "old-k8s-version-512867" state is running.
	I1229 07:53:02.838707  923761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:53:02.859959  923761 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/config.json ...
	I1229 07:53:02.860179  923761 machine.go:94] provisionDockerMachine start ...
	I1229 07:53:02.860234  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:02.881489  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:02.881809  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:02.881817  923761 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:53:02.882502  923761 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:53:06.044725  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512867
	
	I1229 07:53:06.044843  923761 ubuntu.go:182] provisioning hostname "old-k8s-version-512867"
	I1229 07:53:06.044934  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.063856  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:06.064189  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:06.064202  923761 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-512867 && echo "old-k8s-version-512867" | sudo tee /etc/hostname
	I1229 07:53:06.226551  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512867
	
	I1229 07:53:06.226632  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.244939  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:06.245263  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:06.245285  923761 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-512867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-512867/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-512867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:53:06.397228  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:53:06.397319  923761 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:53:06.397374  923761 ubuntu.go:190] setting up certificates
	I1229 07:53:06.397405  923761 provision.go:84] configureAuth start
	I1229 07:53:06.397509  923761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:53:06.414732  923761 provision.go:143] copyHostCerts
	I1229 07:53:06.414807  923761 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:53:06.414825  923761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:53:06.414908  923761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:53:06.415016  923761 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:53:06.415022  923761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:53:06.415049  923761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:53:06.415106  923761 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:53:06.415111  923761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:53:06.415134  923761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:53:06.415236  923761 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-512867 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-512867]
	I1229 07:53:06.740444  923761 provision.go:177] copyRemoteCerts
	I1229 07:53:06.740563  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:53:06.740619  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.758956  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:06.864911  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:53:06.882973  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1229 07:53:06.901435  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:53:06.919886  923761 provision.go:87] duration metric: took 522.443947ms to configureAuth
	I1229 07:53:06.919916  923761 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:53:06.920162  923761 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:53:06.920278  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:06.937558  923761 main.go:144] libmachine: Using SSH client type: native
	I1229 07:53:06.937867  923761 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1229 07:53:06.937892  923761 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:53:07.295849  923761 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:53:07.295875  923761 machine.go:97] duration metric: took 4.435687239s to provisionDockerMachine
	I1229 07:53:07.295895  923761 start.go:293] postStartSetup for "old-k8s-version-512867" (driver="docker")
	I1229 07:53:07.295907  923761 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:53:07.295977  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:53:07.296025  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.317498  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.429238  923761 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:53:07.432927  923761 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:53:07.432957  923761 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:53:07.432969  923761 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:53:07.433024  923761 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:53:07.433102  923761 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:53:07.433210  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:53:07.442301  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:53:07.463382  923761 start.go:296] duration metric: took 167.470934ms for postStartSetup
	I1229 07:53:07.463462  923761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:53:07.463515  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.487352  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.597826  923761 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:53:07.602632  923761 fix.go:56] duration metric: took 5.067120778s for fixHost
	I1229 07:53:07.602662  923761 start.go:83] releasing machines lock for "old-k8s-version-512867", held for 5.06717933s
	I1229 07:53:07.602762  923761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-512867
	I1229 07:53:07.620748  923761 ssh_runner.go:195] Run: cat /version.json
	I1229 07:53:07.620828  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.621142  923761 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:53:07.621209  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:07.641976  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.643674  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:07.836600  923761 ssh_runner.go:195] Run: systemctl --version
	I1229 07:53:07.843262  923761 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:53:07.883760  923761 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:53:07.888483  923761 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:53:07.888583  923761 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:53:07.896862  923761 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:53:07.896892  923761 start.go:496] detecting cgroup driver to use...
	I1229 07:53:07.896929  923761 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:53:07.896979  923761 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:53:07.912529  923761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:53:07.926038  923761 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:53:07.926102  923761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:53:07.941563  923761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:53:07.954724  923761 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:53:08.070894  923761 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:53:08.198322  923761 docker.go:234] disabling docker service ...
	I1229 07:53:08.198469  923761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:53:08.215146  923761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:53:08.229573  923761 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:53:08.356978  923761 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:53:08.478427  923761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:53:08.491252  923761 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:53:08.504988  923761 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1229 07:53:08.505076  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.513956  923761 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:53:08.514048  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.523659  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.532711  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.541905  923761 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:53:08.549908  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.558735  923761 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.567260  923761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:53:08.576054  923761 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:53:08.583956  923761 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:53:08.591704  923761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:53:08.703580  923761 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:53:08.891315  923761 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:53:08.891403  923761 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:53:08.895225  923761 start.go:574] Will wait 60s for crictl version
	I1229 07:53:08.895284  923761 ssh_runner.go:195] Run: which crictl
	I1229 07:53:08.898752  923761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:53:08.928142  923761 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:53:08.928239  923761 ssh_runner.go:195] Run: crio --version
	I1229 07:53:08.968526  923761 ssh_runner.go:195] Run: crio --version
	I1229 07:53:09.004414  923761 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I1229 07:53:09.007685  923761 cli_runner.go:164] Run: docker network inspect old-k8s-version-512867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:53:09.025021  923761 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:53:09.029203  923761 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:53:09.038918  923761 kubeadm.go:884] updating cluster {Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:53:09.039040  923761 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:53:09.039091  923761 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:53:09.075356  923761 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:53:09.075377  923761 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:53:09.075433  923761 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:53:09.101870  923761 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:53:09.101899  923761 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:53:09.101907  923761 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1229 07:53:09.102008  923761 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-512867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:53:09.102097  923761 ssh_runner.go:195] Run: crio config
	I1229 07:53:09.169201  923761 cni.go:84] Creating CNI manager for ""
	I1229 07:53:09.169227  923761 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:53:09.169248  923761 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:53:09.169297  923761 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-512867 NodeName:old-k8s-version-512867 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:53:09.169486  923761 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-512867"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:53:09.169601  923761 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1229 07:53:09.177667  923761 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:53:09.177751  923761 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:53:09.185180  923761 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1229 07:53:09.197578  923761 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:53:09.210297  923761 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
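	The 2160-byte file copied above is the kubeadm config printed just before it (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough, hypothetical illustration of how such a file can be produced, here is a minimal Go text/template sketch; the struct, the field names, and the trimmed-down template are illustrative only and not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the values substituted into the (trimmed) kubeadm
// template below; both the struct and the template are illustrative.
type clusterParams struct {
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		BindPort:          8443,
		KubernetesVersion: "v1.28.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; the real flow writes the rendered file to
	// /var/tmp/minikube/kubeadm.yaml.new on the node over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}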
	I1229 07:53:09.223110  923761 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:53:09.227157  923761 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
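	The grep/cp pair above pins control-plane.minikube.internal to 192.168.85.2 in /etc/hosts: any existing line for that name is dropped and a single fresh entry is appended. Below is a simplified Go sketch of the same rewrite, run against a scratch file rather than the node's real /etc/hosts; the helper name and sample path are illustrative.

package main

import (
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that name resolves to ip: stale entries for
// name are dropped and a single fresh "ip<TAB>name" line is appended.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any existing control-plane entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Operate on a scratch copy; the real command targets /etc/hosts via sudo.
	_ = os.WriteFile("hosts.sample", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := pinHost("hosts.sample", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}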
	I1229 07:53:09.237003  923761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:53:09.355975  923761 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:53:09.376812  923761 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867 for IP: 192.168.85.2
	I1229 07:53:09.376918  923761 certs.go:195] generating shared ca certs ...
	I1229 07:53:09.376961  923761 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:09.377280  923761 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:53:09.377437  923761 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:53:09.377537  923761 certs.go:257] generating profile certs ...
	I1229 07:53:09.377738  923761 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.key
	I1229 07:53:09.377940  923761 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key.14700b57
	I1229 07:53:09.378079  923761 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key
	I1229 07:53:09.378346  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:53:09.378430  923761 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:53:09.378489  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:53:09.378567  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:53:09.378685  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:53:09.378904  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:53:09.379077  923761 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:53:09.380092  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:53:09.409445  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:53:09.435746  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:53:09.464410  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:53:09.491716  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1229 07:53:09.531766  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:53:09.566179  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:53:09.592620  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:53:09.617423  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:53:09.636630  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:53:09.656141  923761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:53:09.675718  923761 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:53:09.694021  923761 ssh_runner.go:195] Run: openssl version
	I1229 07:53:09.701080  923761 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.710851  923761 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:53:09.720846  923761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.725704  923761 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.725828  923761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:53:09.768625  923761 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:53:09.776738  923761 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.784663  923761 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:53:09.792709  923761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.796737  923761 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.796829  923761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:53:09.838055  923761 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:53:09.845901  923761 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.853368  923761 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:53:09.861059  923761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.864961  923761 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.865053  923761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:53:09.907356  923761 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
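	Each certificate installed above is checked for a matching OpenSSL subject-hash symlink in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem). The log only tests that such a link exists; as a sketch of how a <hash>.0 link could be created, assuming openssl is on PATH and the process can write to /etc/ssl/certs:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the subject hash of certPath and points the
// corresponding /etc/ssl/certs/<hash>.0 symlink at it (like `ln -fs`).
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, mirroring -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}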
	I1229 07:53:09.915023  923761 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:53:09.920176  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:53:09.961608  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:53:10.003591  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:53:10.064199  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:53:10.129414  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:53:10.203555  923761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
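	The `openssl x509 -checkend 86400` calls above succeed only if the certificate is still valid 24 hours from now. The equivalent check in pure Go with crypto/x509, as a sketch; the path in main is taken from the log and is only an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d, which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}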
	I1229 07:53:10.289514  923761 kubeadm.go:401] StartCluster: {Name:old-k8s-version-512867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-512867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:53:10.289606  923761 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:53:10.289686  923761 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:53:10.336584  923761 cri.go:96] found id: "39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9"
	I1229 07:53:10.336609  923761 cri.go:96] found id: "4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b"
	I1229 07:53:10.336615  923761 cri.go:96] found id: "0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401"
	I1229 07:53:10.336618  923761 cri.go:96] found id: "946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d"
	I1229 07:53:10.336628  923761 cri.go:96] found id: ""
	I1229 07:53:10.336676  923761 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:53:10.351590  923761 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:53:10Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:53:10.351671  923761 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:53:10.361855  923761 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:53:10.361878  923761 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:53:10.361930  923761 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:53:10.372459  923761 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:53:10.372934  923761 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-512867" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:53:10.373039  923761 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-512867" cluster setting kubeconfig missing "old-k8s-version-512867" context setting]
	I1229 07:53:10.373324  923761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:10.374713  923761 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:53:10.386773  923761 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:53:10.386808  923761 kubeadm.go:602] duration metric: took 24.924118ms to restartPrimaryControlPlane
	I1229 07:53:10.386819  923761 kubeadm.go:403] duration metric: took 97.316176ms to StartCluster
	I1229 07:53:10.386834  923761 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:10.386900  923761 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:53:10.387546  923761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:53:10.387755  923761 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:53:10.388151  923761 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:53:10.388224  923761 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-512867"
	I1229 07:53:10.388237  923761 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-512867"
	W1229 07:53:10.388243  923761 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:53:10.388264  923761 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:53:10.388726  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.389238  923761 config.go:182] Loaded profile config "old-k8s-version-512867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1229 07:53:10.389334  923761 addons.go:70] Setting dashboard=true in profile "old-k8s-version-512867"
	I1229 07:53:10.389361  923761 addons.go:239] Setting addon dashboard=true in "old-k8s-version-512867"
	W1229 07:53:10.389391  923761 addons.go:248] addon dashboard should already be in state true
	I1229 07:53:10.389434  923761 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:53:10.389865  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.392576  923761 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-512867"
	I1229 07:53:10.392614  923761 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-512867"
	I1229 07:53:10.392799  923761 out.go:179] * Verifying Kubernetes components...
	I1229 07:53:10.393274  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.395997  923761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:53:10.446447  923761 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-512867"
	W1229 07:53:10.446469  923761 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:53:10.446493  923761 host.go:66] Checking if "old-k8s-version-512867" exists ...
	I1229 07:53:10.446911  923761 cli_runner.go:164] Run: docker container inspect old-k8s-version-512867 --format={{.State.Status}}
	I1229 07:53:10.451225  923761 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:53:10.457629  923761 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:53:10.457746  923761 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:53:10.460960  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:53:10.460987  923761 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:53:10.461055  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:10.461285  923761 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:53:10.461297  923761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:53:10.461336  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:10.492524  923761 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:53:10.492545  923761 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:53:10.492606  923761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-512867
	I1229 07:53:10.508888  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:10.538144  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:10.541338  923761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/old-k8s-version-512867/id_rsa Username:docker}
	I1229 07:53:10.769268  923761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:53:10.776962  923761 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:53:10.813865  923761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:53:10.861816  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:53:10.861886  923761 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:53:10.950428  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:53:10.950493  923761 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:53:11.051662  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:53:11.051731  923761 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:53:11.097672  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:53:11.097738  923761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:53:11.121337  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:53:11.121403  923761 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:53:11.146578  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:53:11.146645  923761 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:53:11.170206  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:53:11.170292  923761 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:53:11.190367  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:53:11.190444  923761 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:53:11.227925  923761 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:53:11.228004  923761 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:53:11.254148  923761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:53:16.371056  923761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.601750549s)
	I1229 07:53:16.371115  923761 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.594132849s)
	I1229 07:53:16.371151  923761 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-512867" to be "Ready" ...
	I1229 07:53:16.371485  923761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.557597023s)
	I1229 07:53:16.397571  923761 node_ready.go:49] node "old-k8s-version-512867" is "Ready"
	I1229 07:53:16.397596  923761 node_ready.go:38] duration metric: took 26.4331ms for node "old-k8s-version-512867" to be "Ready" ...
	I1229 07:53:16.397609  923761 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:53:16.397664  923761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:53:16.982870  923761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.728627073s)
	I1229 07:53:16.982931  923761 api_server.go:72] duration metric: took 6.595140301s to wait for apiserver process to appear ...
	I1229 07:53:16.983106  923761 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:53:16.983132  923761 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:53:16.986122  923761 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-512867 addons enable metrics-server
	
	I1229 07:53:16.989064  923761 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1229 07:53:16.991536  923761 addons.go:530] duration metric: took 6.603383162s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1229 07:53:16.995332  923761 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:53:16.996876  923761 api_server.go:141] control plane version: v1.28.0
	I1229 07:53:16.996902  923761 api_server.go:131] duration metric: took 13.787762ms to wait for apiserver health ...
	I1229 07:53:16.996912  923761 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:53:17.001026  923761 system_pods.go:59] 8 kube-system pods found
	I1229 07:53:17.001072  923761 system_pods.go:61] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:53:17.001082  923761 system_pods.go:61] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:53:17.001089  923761 system_pods.go:61] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:53:17.001096  923761 system_pods.go:61] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:53:17.001103  923761 system_pods.go:61] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:53:17.001108  923761 system_pods.go:61] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:53:17.001115  923761 system_pods.go:61] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:53:17.001120  923761 system_pods.go:61] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Running
	I1229 07:53:17.001128  923761 system_pods.go:74] duration metric: took 4.210362ms to wait for pod list to return data ...
	I1229 07:53:17.001139  923761 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:53:17.005256  923761 default_sa.go:45] found service account: "default"
	I1229 07:53:17.005341  923761 default_sa.go:55] duration metric: took 4.191039ms for default service account to be created ...
	I1229 07:53:17.005359  923761 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:53:17.009713  923761 system_pods.go:86] 8 kube-system pods found
	I1229 07:53:17.009750  923761 system_pods.go:89] "coredns-5dd5756b68-wr2cl" [38a5da82-9cb7-4f2a-8258-393b1cb27c8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:53:17.009761  923761 system_pods.go:89] "etcd-old-k8s-version-512867" [174e6131-3bb9-4c27-b831-a7dec9f07c17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:53:17.009768  923761 system_pods.go:89] "kindnet-rwfkz" [3700dc6c-45b9-4abc-a322-f283999cca24] Running
	I1229 07:53:17.009776  923761 system_pods.go:89] "kube-apiserver-old-k8s-version-512867" [2ff4594c-21b0-4a6b-be02-cadbcc4332ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:53:17.009799  923761 system_pods.go:89] "kube-controller-manager-old-k8s-version-512867" [ac43a8d0-cf03-43b8-9d59-5d1bc7938de5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:53:17.009807  923761 system_pods.go:89] "kube-proxy-gdm8d" [da0bed78-1180-482b-89a3-cccbfe791640] Running
	I1229 07:53:17.009815  923761 system_pods.go:89] "kube-scheduler-old-k8s-version-512867" [595f8c3c-4177-40a3-bd88-b0794f8ac76f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:53:17.009824  923761 system_pods.go:89] "storage-provisioner" [05d3f69a-6463-4fc2-b3f8-fb6ed350c50f] Running
	I1229 07:53:17.009832  923761 system_pods.go:126] duration metric: took 4.465757ms to wait for k8s-apps to be running ...
	I1229 07:53:17.009840  923761 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:53:17.009910  923761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:53:17.024112  923761 system_svc.go:56] duration metric: took 14.261971ms WaitForService to wait for kubelet
	I1229 07:53:17.024185  923761 kubeadm.go:587] duration metric: took 6.636394219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:53:17.024222  923761 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:53:17.027508  923761 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:53:17.027589  923761 node_conditions.go:123] node cpu capacity is 2
	I1229 07:53:17.027617  923761 node_conditions.go:105] duration metric: took 3.355966ms to run NodePressure ...
	I1229 07:53:17.027658  923761 start.go:242] waiting for startup goroutines ...
	I1229 07:53:17.027683  923761 start.go:247] waiting for cluster config update ...
	I1229 07:53:17.027711  923761 start.go:256] writing updated cluster config ...
	I1229 07:53:17.028049  923761 ssh_runner.go:195] Run: rm -f paused
	I1229 07:53:17.032335  923761 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:53:17.037102  923761 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wr2cl" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:53:19.043232  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:21.043800  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:23.542840  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:25.543392  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:27.544137  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:30.047169  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:32.543854  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:35.042861  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:37.044403  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:39.543261  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:41.543575  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:44.043095  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	W1229 07:53:46.543414  923761 pod_ready.go:104] pod "coredns-5dd5756b68-wr2cl" is not "Ready", error: <nil>
	I1229 07:53:48.046559  923761 pod_ready.go:94] pod "coredns-5dd5756b68-wr2cl" is "Ready"
	I1229 07:53:48.046585  923761 pod_ready.go:86] duration metric: took 31.009453304s for pod "coredns-5dd5756b68-wr2cl" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.051019  923761 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.060329  923761 pod_ready.go:94] pod "etcd-old-k8s-version-512867" is "Ready"
	I1229 07:53:48.060361  923761 pod_ready.go:86] duration metric: took 9.31473ms for pod "etcd-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.064569  923761 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.070247  923761 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-512867" is "Ready"
	I1229 07:53:48.070275  923761 pod_ready.go:86] duration metric: took 5.676678ms for pod "kube-apiserver-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.073991  923761 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.241075  923761 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-512867" is "Ready"
	I1229 07:53:48.241155  923761 pod_ready.go:86] duration metric: took 167.134535ms for pod "kube-controller-manager-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.442297  923761 pod_ready.go:83] waiting for pod "kube-proxy-gdm8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:48.841420  923761 pod_ready.go:94] pod "kube-proxy-gdm8d" is "Ready"
	I1229 07:53:48.841452  923761 pod_ready.go:86] duration metric: took 399.066413ms for pod "kube-proxy-gdm8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:49.042305  923761 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:49.443203  923761 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-512867" is "Ready"
	I1229 07:53:49.443227  923761 pod_ready.go:86] duration metric: took 400.894489ms for pod "kube-scheduler-old-k8s-version-512867" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:53:49.443240  923761 pod_ready.go:40] duration metric: took 32.410827636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
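	The pod_ready loop above polls each control-plane pod until its Ready condition is True (CoreDNS took about 31s here after the restart). Below is a rough client-go sketch of the same kind of check; the kubeconfig path, label selector, and timeout are illustrative, and the snippet assumes the k8s.io/client-go and k8s.io/api modules are available.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitReady polls pods in kube-system matching selector until all of them
// report the Ready condition, or the timeout expires.
func waitReady(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			allReady = allReady && ready
		}
		if allReady {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not Ready after %s", selector, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitReady(cs, "k8s-app=kube-dns", 4*time.Minute); err != nil {
		panic(err)
	}
}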
	I1229 07:53:49.514601  923761 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1229 07:53:49.517822  923761 out.go:203] 
	W1229 07:53:49.520763  923761 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1229 07:53:49.523707  923761 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:53:49.526764  923761 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-512867" cluster and "default" namespace by default
	I1229 07:53:52.201509  913056 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000842749s
	I1229 07:53:52.201542  913056 kubeadm.go:319] 
	I1229 07:53:52.201600  913056 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:53:52.201638  913056 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:53:52.201746  913056 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:53:52.201755  913056 kubeadm.go:319] 
	I1229 07:53:52.201859  913056 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:53:52.201895  913056 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:53:52.201930  913056 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:53:52.201938  913056 kubeadm.go:319] 
	I1229 07:53:52.208144  913056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:53:52.208619  913056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:53:52.208743  913056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:53:52.209029  913056 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:53:52.209041  913056 kubeadm.go:319] 
	I1229 07:53:52.209109  913056 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:53:52.209252  913056 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-122604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000842749s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
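	The failure above comes down to the kubelet never answering its local health endpoint: kubeadm polls http://127.0.0.1:10248/healthz for up to 4 minutes and gives up on "connection refused". A tiny Go probe of that endpoint, for reference when debugging on the node:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	// Same endpoint kubeadm's kubelet-check polls; "connection refused" here
	// means the kubelet is not running (or not listening) on this node.
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}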
	
	I1229 07:53:52.209331  913056 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1229 07:53:52.622384  913056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:53:52.635442  913056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:53:52.635506  913056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:53:52.644553  913056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:53:52.644574  913056 kubeadm.go:158] found existing configuration files:
	
	I1229 07:53:52.644655  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:53:52.652636  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:53:52.652702  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:53:52.660083  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:53:52.668000  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:53:52.668064  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:53:52.675501  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:53:52.683499  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:53:52.683585  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:53:52.691378  913056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:53:52.699077  913056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:53:52.699141  913056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
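	The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so the next `kubeadm init` regenerates it. A simplified Go sketch of that rule (it needs root to touch /etc/kubernetes and is not minikube's actual implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing elsewhere: remove so `kubeadm init` rewrites it.
			_ = os.Remove(path)
			fmt.Println("removed stale", path)
			continue
		}
		fmt.Println("keeping", path)
	}
}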
	I1229 07:53:52.706928  913056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:53:52.743355  913056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:53:52.743680  913056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:53:52.820188  913056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:53:52.820265  913056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:53:52.820300  913056 kubeadm.go:319] OS: Linux
	I1229 07:53:52.820355  913056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:53:52.820409  913056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:53:52.820460  913056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:53:52.820512  913056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:53:52.820563  913056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:53:52.820616  913056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:53:52.820665  913056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:53:52.820716  913056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:53:52.820764  913056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:53:52.898967  913056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:53:52.899087  913056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:53:52.899183  913056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:53:52.909202  913056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:53:52.915184  913056 out.go:252]   - Generating certificates and keys ...
	I1229 07:53:52.915280  913056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:53:52.915366  913056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:53:52.915444  913056 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:53:52.915505  913056 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:53:52.915575  913056 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:53:52.915628  913056 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:53:52.915691  913056 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:53:52.915753  913056 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:53:52.915833  913056 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:53:52.915906  913056 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:53:52.915943  913056 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:53:52.916000  913056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:53:53.163034  913056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:53:53.260276  913056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:53:53.509146  913056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:53:53.923256  913056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:53:54.338003  913056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:53:54.338712  913056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:53:54.341424  913056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:53:54.344645  913056 out.go:252]   - Booting up control plane ...
	I1229 07:53:54.344752  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:53:54.344850  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:53:54.344919  913056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:53:54.362719  913056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:53:54.362842  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:53:54.370734  913056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:53:54.371185  913056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:53:54.371512  913056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:53:54.549211  913056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:53:54.549333  913056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Dec 29 07:53:46 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:46.868936284Z" level=info msg="Started container" PID=1679 containerID=ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5 description=kube-system/storage-provisioner/storage-provisioner id=079de392-b02f-4dd4-9dfe-a18dd70fc5b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d47139f4221f5d27794fc0238eb7d1740541674f2e2b8d97e19a9ebae4894bea
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.403353583Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=450b1cfa-f8c5-4140-abdc-3b2e29ab40e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.404238429Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=077f73a8-b5c3-4dd6-9150-ba7cb672321e name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.405269681Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper" id=0a47ace0-5958-462f-b680-1229d523963c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.405392816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.413799987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.415215101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.432081112Z" level=info msg="Created container b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper" id=0a47ace0-5958-462f-b680-1229d523963c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.433168659Z" level=info msg="Starting container: b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3" id=b00bdf1b-c264-458f-a4a6-a919a4e98adb name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.434941846Z" level=info msg="Started container" PID=1694 containerID=b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper id=b00bdf1b-c264-458f-a4a6-a919a4e98adb name=/runtime.v1.RuntimeService/StartContainer sandboxID=022f09fccb461fa5d061c589a1f3fd1b062719138e5c00e02d1b006aae4efb56
	Dec 29 07:53:48 old-k8s-version-512867 conmon[1692]: conmon b7b38088b9bb759e8bf9 <ninfo>: container 1694 exited with status 1
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.850810533Z" level=info msg="Removing container: c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2" id=482274e3-5107-43af-930d-993403faae4d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.858973231Z" level=info msg="Error loading conmon cgroup of container c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2: cgroup deleted" id=482274e3-5107-43af-930d-993403faae4d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:53:48 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:48.861687405Z" level=info msg="Removed container c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw/dashboard-metrics-scraper" id=482274e3-5107-43af-930d-993403faae4d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.471844989Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.471882117Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.476223911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.476259981Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.480578201Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.480616618Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.485019508Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.485053214Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.485079594Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.48927239Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:53:56 old-k8s-version-512867 crio[662]: time="2025-12-29T07:53:56.489305876Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	b7b38088b9bb7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   022f09fccb461       dashboard-metrics-scraper-5f989dc9cf-z7mqw       kubernetes-dashboard
	ec432e862408c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   d47139f4221f5       storage-provisioner                              kube-system
	43102affada64       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   5e28cd90a62ae       kubernetes-dashboard-8694d4445c-cqflv            kubernetes-dashboard
	531d15892fc05       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   8259cb9a26fa6       busybox                                          default
	ac519663e98ff       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   e312838c20b69       coredns-5dd5756b68-wr2cl                         kube-system
	68fd186fabb99       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   d47139f4221f5       storage-provisioner                              kube-system
	52aa628b40d1b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   5241ce4cdbc98       kube-proxy-gdm8d                                 kube-system
	cf17b98ff46bb       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   1c9974618a9a1       kindnet-rwfkz                                    kube-system
	39a8d4c108d73       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   41acfda192068       kube-scheduler-old-k8s-version-512867            kube-system
	4500ff662d9c4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   1b4cac6dc255e       etcd-old-k8s-version-512867                      kube-system
	0a77089c3baf8       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   9538b5bdc8f53       kube-apiserver-old-k8s-version-512867            kube-system
	946047cfc9104       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   eeb67509d8d00       kube-controller-manager-old-k8s-version-512867   kube-system
	
	
	==> coredns [ac519663e98ff88103f47d543ecf11f3edaef9bbf0f16273dd7dbd903f11025a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55429 - 36971 "HINFO IN 6050377229590491029.8308923331211008531. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056474247s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-512867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-512867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=old-k8s-version-512867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_52_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:52:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-512867
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:53:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:53:45 +0000   Mon, 29 Dec 2025 07:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-512867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                37780f38-62fe-49f0-844c-77007ce7ace0
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-wr2cl                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-512867                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-rwfkz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-512867             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-512867    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-gdm8d                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-512867             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-z7mqw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-cqflv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-512867 event: Registered Node old-k8s-version-512867 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-512867 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-512867 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-512867 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-512867 event: Registered Node old-k8s-version-512867 in Controller
	
	
	==> dmesg <==
	[Dec29 07:24] overlayfs: idmapped layers are currently not supported
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4500ff662d9c4502d7b5756775084ffe6a9bc24207b53bd5b3de45a409cc5b3b] <==
	{"level":"info","ts":"2025-12-29T07:53:10.759618Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:53:10.704984Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-29T07:53:10.705238Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:53:10.75967Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:53:10.759685Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:53:10.705574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-29T07:53:10.760227Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-29T07:53:10.760326Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:53:10.76036Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-29T07:53:10.759568Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:53:10.759602Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:53:12.086914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:53:12.087044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:53:12.087098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:53:12.087139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.087173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.087216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.087251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:53:12.097044Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-512867 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:53:12.097222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:53:12.098327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:53:12.10088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:53:12.118285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:53:12.100948Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:53:12.136978Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 07:54:07 up  4:36,  0 user,  load average: 1.56, 1.82, 2.30
	Linux old-k8s-version-512867 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf17b98ff46bb1bafa413e0da3f2d55d6ac9e3e13454e07df58fdb87b249226e] <==
	I1229 07:53:16.260354       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:53:16.260610       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:53:16.260732       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:53:16.260749       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:53:16.260757       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:53:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:53:16.455672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:53:16.455689       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:53:16.455698       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:53:16.455986       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:53:46.455388       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:53:46.456764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 07:53:46.456993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:53:46.457224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 07:53:47.755856       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:53:47.755896       1 metrics.go:72] Registering metrics
	I1229 07:53:47.755950       1 controller.go:711] "Syncing nftables rules"
	I1229 07:53:56.461830       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:53:56.461870       1 main.go:301] handling current node
	I1229 07:54:06.460872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:54:06.460905       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a77089c3baf8e5da671cb7347c1a4bf5ba26ae15659db5260e0ee7f08a73401] <==
	I1229 07:53:14.700749       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1229 07:53:14.872258       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:53:14.880552       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1229 07:53:14.880627       1 aggregator.go:166] initial CRD sync complete...
	I1229 07:53:14.880635       1 autoregister_controller.go:141] Starting autoregister controller
	I1229 07:53:14.880640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:53:14.880647       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:53:14.903122       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1229 07:53:14.909272       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1229 07:53:14.909715       1 shared_informer.go:318] Caches are synced for configmaps
	I1229 07:53:14.910655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:53:14.954599       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1229 07:53:14.954624       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1229 07:53:14.958277       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1229 07:53:15.594908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1229 07:53:16.770354       1 controller.go:624] quota admission added evaluator for: namespaces
	I1229 07:53:16.843045       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1229 07:53:16.873047       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:53:16.892380       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:53:16.904042       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1229 07:53:16.956647       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.117.181"}
	I1229 07:53:16.975663       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.55.144"}
	I1229 07:53:27.814622       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:53:27.966952       1 controller.go:624] quota admission added evaluator for: endpoints
	I1229 07:53:28.018076       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [946047cfc91045ea55c69cede1ead96528f22108c6c76f45136fa6565ed7d09d] <==
	I1229 07:53:28.031457       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1229 07:53:28.077747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="320.252246ms"
	I1229 07:53:28.078053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.632µs"
	I1229 07:53:28.083731       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	I1229 07:53:28.087868       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-cqflv"
	I1229 07:53:28.100211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.525042ms"
	I1229 07:53:28.105106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.280238ms"
	I1229 07:53:28.120470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.238873ms"
	I1229 07:53:28.120809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.948µs"
	I1229 07:53:28.126432       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:53:28.126971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.538µs"
	I1229 07:53:28.137829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.482487ms"
	I1229 07:53:28.150730       1 shared_informer.go:318] Caches are synced for garbage collector
	I1229 07:53:28.150764       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1229 07:53:28.164617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.721753ms"
	I1229 07:53:28.164728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.179µs"
	I1229 07:53:33.825649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.401163ms"
	I1229 07:53:33.825734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.463µs"
	I1229 07:53:37.827810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.315µs"
	I1229 07:53:38.830092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.828µs"
	I1229 07:53:39.828360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.86µs"
	I1229 07:53:48.039686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.471945ms"
	I1229 07:53:48.039921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.33µs"
	I1229 07:53:48.867151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.243µs"
	I1229 07:53:58.421421       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.309µs"
	
	
	==> kube-proxy [52aa628b40d1b389c30840509ea0f999ff48964de6ba998a2f49e58559f8d12c] <==
	I1229 07:53:16.343465       1 server_others.go:69] "Using iptables proxy"
	I1229 07:53:16.381423       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1229 07:53:16.489382       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:53:16.500220       1 server_others.go:152] "Using iptables Proxier"
	I1229 07:53:16.500338       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1229 07:53:16.500381       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1229 07:53:16.500443       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1229 07:53:16.500705       1 server.go:846] "Version info" version="v1.28.0"
	I1229 07:53:16.501091       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:53:16.501875       1 config.go:188] "Starting service config controller"
	I1229 07:53:16.501974       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1229 07:53:16.502047       1 config.go:97] "Starting endpoint slice config controller"
	I1229 07:53:16.502077       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1229 07:53:16.515645       1 config.go:315] "Starting node config controller"
	I1229 07:53:16.515751       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1229 07:53:16.604420       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1229 07:53:16.604471       1 shared_informer.go:318] Caches are synced for service config
	I1229 07:53:16.616429       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [39a8d4c108d73b046c789c0e818948e14860be2e2e26ee58c79711385c65a4e9] <==
	I1229 07:53:12.508846       1 serving.go:348] Generated self-signed cert in-memory
	W1229 07:53:14.649397       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:53:14.649433       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:53:14.649445       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:53:14.649453       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:53:14.858443       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1229 07:53:14.858483       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:53:14.870884       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:53:14.877390       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1229 07:53:14.877556       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1229 07:53:14.878102       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1229 07:53:14.978336       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.095975     792 topology_manager.go:215] "Topology Admit Handler" podUID="22312289-ae02-4dfe-853e-a5e86e53d038" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-cqflv"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.100265     792 topology_manager.go:215] "Topology Admit Handler" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260641     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mpt\" (UniqueName: \"kubernetes.io/projected/ef6533dc-bd6a-4d38-8551-7232da63ac67-kube-api-access-h2mpt\") pod \"dashboard-metrics-scraper-5f989dc9cf-z7mqw\" (UID: \"ef6533dc-bd6a-4d38-8551-7232da63ac67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260701     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcf9\" (UniqueName: \"kubernetes.io/projected/22312289-ae02-4dfe-853e-a5e86e53d038-kube-api-access-wjcf9\") pod \"kubernetes-dashboard-8694d4445c-cqflv\" (UID: \"22312289-ae02-4dfe-853e-a5e86e53d038\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cqflv"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260731     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ef6533dc-bd6a-4d38-8551-7232da63ac67-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-z7mqw\" (UID: \"ef6533dc-bd6a-4d38-8551-7232da63ac67\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: I1229 07:53:28.260758     792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/22312289-ae02-4dfe-853e-a5e86e53d038-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-cqflv\" (UID: \"22312289-ae02-4dfe-853e-a5e86e53d038\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cqflv"
	Dec 29 07:53:28 old-k8s-version-512867 kubelet[792]: W1229 07:53:28.441884     792 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f522ea5fbf770f7d6b092709855060deb0b10f7ee0fb24714786f5ed08ce911f/crio-5e28cd90a62aeb2d4d142c66787c15a9294e583d0c7b57901b4ad190b1b0c43a WatchSource:0}: Error finding container 5e28cd90a62aeb2d4d142c66787c15a9294e583d0c7b57901b4ad190b1b0c43a: Status 404 returned error can't find the container with id 5e28cd90a62aeb2d4d142c66787c15a9294e583d0c7b57901b4ad190b1b0c43a
	Dec 29 07:53:33 old-k8s-version-512867 kubelet[792]: I1229 07:53:33.815138     792 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-cqflv" podStartSLOduration=1.2941749790000001 podCreationTimestamp="2025-12-29 07:53:28 +0000 UTC" firstStartedPulling="2025-12-29 07:53:28.449838484 +0000 UTC m=+19.074227888" lastFinishedPulling="2025-12-29 07:53:32.970703933 +0000 UTC m=+23.595093337" observedRunningTime="2025-12-29 07:53:33.813970695 +0000 UTC m=+24.438360115" watchObservedRunningTime="2025-12-29 07:53:33.815040428 +0000 UTC m=+24.439429840"
	Dec 29 07:53:37 old-k8s-version-512867 kubelet[792]: I1229 07:53:37.804750     792 scope.go:117] "RemoveContainer" containerID="be2506d276754908c867bd8059d56779bfd827c725bd05ea2dc2ee29ec51fbb4"
	Dec 29 07:53:38 old-k8s-version-512867 kubelet[792]: I1229 07:53:38.808884     792 scope.go:117] "RemoveContainer" containerID="be2506d276754908c867bd8059d56779bfd827c725bd05ea2dc2ee29ec51fbb4"
	Dec 29 07:53:38 old-k8s-version-512867 kubelet[792]: I1229 07:53:38.809189     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:38 old-k8s-version-512867 kubelet[792]: E1229 07:53:38.809716     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:53:39 old-k8s-version-512867 kubelet[792]: I1229 07:53:39.812971     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:39 old-k8s-version-512867 kubelet[792]: E1229 07:53:39.813260     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:53:46 old-k8s-version-512867 kubelet[792]: I1229 07:53:46.838624     792 scope.go:117] "RemoveContainer" containerID="68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: I1229 07:53:48.402725     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: I1229 07:53:48.847415     792 scope.go:117] "RemoveContainer" containerID="c6e7c9888132c206796f11936eb15ea30ffcd5f71e99f9c4b1f8ac3d094076a2"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: I1229 07:53:48.847874     792 scope.go:117] "RemoveContainer" containerID="b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	Dec 29 07:53:48 old-k8s-version-512867 kubelet[792]: E1229 07:53:48.848168     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:53:58 old-k8s-version-512867 kubelet[792]: I1229 07:53:58.402421     792 scope.go:117] "RemoveContainer" containerID="b7b38088b9bb759e8bf9b49863b3af6f8000f783174b619298cf66d8f37b75e3"
	Dec 29 07:53:58 old-k8s-version-512867 kubelet[792]: E1229 07:53:58.402748     792 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z7mqw_kubernetes-dashboard(ef6533dc-bd6a-4d38-8551-7232da63ac67)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z7mqw" podUID="ef6533dc-bd6a-4d38-8551-7232da63ac67"
	Dec 29 07:54:01 old-k8s-version-512867 kubelet[792]: I1229 07:54:01.754629     792 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 29 07:54:01 old-k8s-version-512867 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:54:01 old-k8s-version-512867 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:54:01 old-k8s-version-512867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [43102affada64f3948914ab09a70e4608524e9afb6ab6d838c6ad810b7638c93] <==
	2025/12/29 07:53:33 Using namespace: kubernetes-dashboard
	2025/12/29 07:53:33 Using in-cluster config to connect to apiserver
	2025/12/29 07:53:33 Using secret token for csrf signing
	2025/12/29 07:53:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:53:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:53:33 Successful initial request to the apiserver, version: v1.28.0
	2025/12/29 07:53:33 Generating JWE encryption key
	2025/12/29 07:53:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:53:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:53:33 Initializing JWE encryption key from synchronized object
	2025/12/29 07:53:33 Creating in-cluster Sidecar client
	2025/12/29 07:53:33 Serving insecurely on HTTP port: 9090
	2025/12/29 07:53:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:54:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:53:33 Starting overwatch
	
	
	==> storage-provisioner [68fd186fabb99a7faa9f17108a6cf9ed8b90d645a067f806eb0d6aed359f08ac] <==
	I1229 07:53:16.130629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:53:46.137608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ec432e862408cfa12c66b8854611c25341511d99e01a67f249f6e73b55a781a5] <==
	I1229 07:53:46.885359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:53:46.898956       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:53:46.899030       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1229 07:54:04.299291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:54:04.301292       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-512867_b8e69629-30c1-449b-95bf-5847e0a7210a!
	I1229 07:54:04.303099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"705bfbd5-8239-46ab-90c2-30c511af11a6", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-512867_b8e69629-30c1-449b-95bf-5847e0a7210a became leader
	I1229 07:54:04.402067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-512867_b8e69629-30c1-449b-95bf-5847e0a7210a!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512867 -n old-k8s-version-512867
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512867 -n old-k8s-version-512867: exit status 2 (353.139819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-512867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.83s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.993575ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:55:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-822299 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-822299 describe deploy/metrics-server -n kube-system: exit status 1 (92.624976ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-822299 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-822299
helpers_test.go:244: (dbg) docker inspect no-preload-822299:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70",
	        "Created": "2025-12-29T07:54:12.225052256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 928306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:54:12.295233415Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/hostname",
	        "HostsPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/hosts",
	        "LogPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70-json.log",
	        "Name": "/no-preload-822299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-822299:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-822299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70",
	                "LowerDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-822299",
	                "Source": "/var/lib/docker/volumes/no-preload-822299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-822299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-822299",
	                "name.minikube.sigs.k8s.io": "no-preload-822299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c4239c1377da0a2fcc01e7b424cdb852414f03701c225ae7d406dab661018cec",
	            "SandboxKey": "/var/run/docker/netns/c4239c1377da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-822299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:1b:64:04:11:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fa6b3c21ae85fa8b96c74beecaa8ac1deb81e91b1a2aaf65c5b20de7bb77be6",
	                    "EndpointID": "8049be19320558522b829b0b2ffabd1d1e6e5b6a90170605f76ecf62042fd154",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-822299",
	                        "199b371fef13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-822299 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-822299 logs -n 25: (1.213248608s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-095300 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ ssh     │ -p cilium-095300 sudo crio config                                                                                                                                                                                                             │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │                     │
	│ delete  │ -p cilium-095300                                                                                                                                                                                                                              │ cilium-095300             │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:45 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562  │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:54:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:54:11.207728  927984 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:54:11.207908  927984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:54:11.207917  927984 out.go:374] Setting ErrFile to fd 2...
	I1229 07:54:11.207923  927984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:54:11.208175  927984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:54:11.208658  927984 out.go:368] Setting JSON to false
	I1229 07:54:11.209600  927984 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16603,"bootTime":1766978248,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:54:11.209676  927984 start.go:143] virtualization:  
	I1229 07:54:11.215999  927984 out.go:179] * [no-preload-822299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:54:11.219909  927984 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:54:11.219980  927984 notify.go:221] Checking for updates...
	I1229 07:54:11.226358  927984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:54:11.229493  927984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:54:11.232631  927984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:54:11.235730  927984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:54:11.238779  927984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:54:11.242448  927984 config.go:182] Loaded profile config "force-systemd-flag-122604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:54:11.242560  927984 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:54:11.265920  927984 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:54:11.266048  927984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:54:11.325686  927984 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:54:11.315617294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:54:11.325831  927984 docker.go:319] overlay module found
	I1229 07:54:11.328867  927984 out.go:179] * Using the docker driver based on user configuration
	I1229 07:54:11.331870  927984 start.go:309] selected driver: docker
	I1229 07:54:11.331926  927984 start.go:928] validating driver "docker" against <nil>
	I1229 07:54:11.331945  927984 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:54:11.332659  927984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:54:11.385885  927984 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:54:11.376801204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:54:11.386034  927984 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:54:11.386256  927984 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:54:11.389273  927984 out.go:179] * Using Docker driver with root privileges
	I1229 07:54:11.392221  927984 cni.go:84] Creating CNI manager for ""
	I1229 07:54:11.392289  927984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:54:11.392303  927984 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:54:11.392408  927984 start.go:353] cluster config:
	{Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:54:11.395575  927984 out.go:179] * Starting "no-preload-822299" primary control-plane node in "no-preload-822299" cluster
	I1229 07:54:11.398350  927984 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:54:11.401334  927984 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:54:11.404239  927984 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:54:11.404309  927984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:54:11.404406  927984 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json ...
	I1229 07:54:11.404450  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json: {Name:mkad45ebb021681a5d2d5c3a1d409dfc810c7544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:11.404721  927984 cache.go:107] acquiring lock: {Name:mk77910156d27401bb7909a31b93ec051f9aa764 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.404864  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:54:11.404949  927984 cache.go:107] acquiring lock: {Name:mkfe408e55325d204df091c798ec530361ec90e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405013  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:54:11.405019  927984 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 75.053µs
	I1229 07:54:11.405040  927984 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:54:11.405056  927984 cache.go:107] acquiring lock: {Name:mk7acdd26b6f5f36232f222f1b8b250f3e55a098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405093  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:54:11.405101  927984 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 45.94µs
	I1229 07:54:11.405106  927984 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:54:11.405115  927984 cache.go:107] acquiring lock: {Name:mk6c222e06ab2a6eba07c71c6bbf60553a466256 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405142  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:54:11.405155  927984 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 32.813µs
	I1229 07:54:11.405161  927984 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:54:11.405180  927984 cache.go:107] acquiring lock: {Name:mkc4b5fee2fc8ad3886c6f422b410c403d3cd271 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405206  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:54:11.405211  927984 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 32.878µs
	I1229 07:54:11.405217  927984 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:54:11.405229  927984 cache.go:107] acquiring lock: {Name:mk42d4be1fc1ab08338ed03f966f32cf4e137a60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405255  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:54:11.405275  927984 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 31.36µs
	I1229 07:54:11.405281  927984 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:54:11.405291  927984 cache.go:107] acquiring lock: {Name:mk7e5915b6ec72206d698779b421c83aee716ab1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405316  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:54:11.405322  927984 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.008µs
	I1229 07:54:11.405327  927984 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:54:11.405336  927984 cache.go:107] acquiring lock: {Name:mkb59ef52ccfb9558cc0c53f2d458d37076da56a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.405360  927984 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:54:11.405365  927984 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.555µs
	I1229 07:54:11.405370  927984 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:54:11.404892  927984 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 176.994µs
	I1229 07:54:11.405468  927984 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:54:11.405493  927984 cache.go:87] Successfully saved all images to host disk.
	I1229 07:54:11.424341  927984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:54:11.424364  927984 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:54:11.424381  927984 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:54:11.424411  927984 start.go:360] acquireMachinesLock for no-preload-822299: {Name:mk8778cb7dc7747a72362114977db1c0dacca8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:54:11.424512  927984 start.go:364] duration metric: took 81.707µs to acquireMachinesLock for "no-preload-822299"
	I1229 07:54:11.424540  927984 start.go:93] Provisioning new machine with config: &{Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:54:11.424619  927984 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:54:11.429848  927984 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:54:11.430112  927984 start.go:159] libmachine.API.Create for "no-preload-822299" (driver="docker")
	I1229 07:54:11.430155  927984 client.go:173] LocalClient.Create starting
	I1229 07:54:11.430239  927984 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:54:11.430276  927984 main.go:144] libmachine: Decoding PEM data...
	I1229 07:54:11.430298  927984 main.go:144] libmachine: Parsing certificate...
	I1229 07:54:11.430357  927984 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:54:11.430398  927984 main.go:144] libmachine: Decoding PEM data...
	I1229 07:54:11.430416  927984 main.go:144] libmachine: Parsing certificate...
	I1229 07:54:11.430808  927984 cli_runner.go:164] Run: docker network inspect no-preload-822299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:54:11.447192  927984 cli_runner.go:211] docker network inspect no-preload-822299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:54:11.447273  927984 network_create.go:284] running [docker network inspect no-preload-822299] to gather additional debugging logs...
	I1229 07:54:11.447297  927984 cli_runner.go:164] Run: docker network inspect no-preload-822299
	W1229 07:54:11.463363  927984 cli_runner.go:211] docker network inspect no-preload-822299 returned with exit code 1
	I1229 07:54:11.463400  927984 network_create.go:287] error running [docker network inspect no-preload-822299]: docker network inspect no-preload-822299: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-822299 not found
	I1229 07:54:11.463414  927984 network_create.go:289] output of [docker network inspect no-preload-822299]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-822299 not found
	
	** /stderr **
	I1229 07:54:11.463507  927984 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:54:11.480059  927984 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:54:11.480420  927984 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:54:11.480758  927984 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:54:11.481045  927984 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-714d1dc9bdce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:13:37:2f:fc:8f} reservation:<nil>}
	I1229 07:54:11.481465  927984 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3a90}
	I1229 07:54:11.481488  927984 network_create.go:124] attempt to create docker network no-preload-822299 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:54:11.481545  927984 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-822299 no-preload-822299
	I1229 07:54:11.544420  927984 network_create.go:108] docker network no-preload-822299 192.168.85.0/24 created
	I1229 07:54:11.544454  927984 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-822299" container
	I1229 07:54:11.544554  927984 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:54:11.560415  927984 cli_runner.go:164] Run: docker volume create no-preload-822299 --label name.minikube.sigs.k8s.io=no-preload-822299 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:54:11.578638  927984 oci.go:103] Successfully created a docker volume no-preload-822299
	I1229 07:54:11.578723  927984 cli_runner.go:164] Run: docker run --rm --name no-preload-822299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-822299 --entrypoint /usr/bin/test -v no-preload-822299:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:54:12.150878  927984 oci.go:107] Successfully prepared a docker volume no-preload-822299
	I1229 07:54:12.150948  927984 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1229 07:54:12.151094  927984 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:54:12.151219  927984 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:54:12.209975  927984 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-822299 --name no-preload-822299 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-822299 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-822299 --network no-preload-822299 --ip 192.168.85.2 --volume no-preload-822299:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:54:12.518222  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Running}}
	I1229 07:54:12.539592  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:54:12.565662  927984 cli_runner.go:164] Run: docker exec no-preload-822299 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:54:12.623958  927984 oci.go:144] the created container "no-preload-822299" has a running status.
	I1229 07:54:12.624002  927984 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa...
	I1229 07:54:13.295606  927984 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:54:13.325073  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:54:13.353012  927984 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:54:13.353039  927984 kic_runner.go:114] Args: [docker exec --privileged no-preload-822299 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:54:13.412455  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:54:13.436559  927984 machine.go:94] provisionDockerMachine start ...
	I1229 07:54:13.436668  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:13.471383  927984 main.go:144] libmachine: Using SSH client type: native
	I1229 07:54:13.471731  927984 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I1229 07:54:13.471748  927984 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:54:13.664979  927984 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-822299
	
	I1229 07:54:13.665006  927984 ubuntu.go:182] provisioning hostname "no-preload-822299"
	I1229 07:54:13.665082  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:13.690536  927984 main.go:144] libmachine: Using SSH client type: native
	I1229 07:54:13.690850  927984 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I1229 07:54:13.690868  927984 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-822299 && echo "no-preload-822299" | sudo tee /etc/hostname
	I1229 07:54:13.855461  927984 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-822299
	
	I1229 07:54:13.855540  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:13.879758  927984 main.go:144] libmachine: Using SSH client type: native
	I1229 07:54:13.880059  927984 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I1229 07:54:13.880076  927984 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-822299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-822299/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-822299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:54:14.037241  927984 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:54:14.037312  927984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:54:14.037349  927984 ubuntu.go:190] setting up certificates
	I1229 07:54:14.037358  927984 provision.go:84] configureAuth start
	I1229 07:54:14.037419  927984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:54:14.054898  927984 provision.go:143] copyHostCerts
	I1229 07:54:14.054973  927984 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:54:14.054988  927984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:54:14.055071  927984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:54:14.055172  927984 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:54:14.055184  927984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:54:14.055214  927984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:54:14.055270  927984 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:54:14.055279  927984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:54:14.055304  927984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:54:14.055358  927984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.no-preload-822299 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-822299]
	I1229 07:54:14.309929  927984 provision.go:177] copyRemoteCerts
	I1229 07:54:14.310002  927984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:54:14.310081  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:14.327272  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:14.432713  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:54:14.450814  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:54:14.468975  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:54:14.486209  927984 provision.go:87] duration metric: took 448.837254ms to configureAuth
	I1229 07:54:14.486240  927984 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:54:14.486460  927984 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:54:14.486561  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:14.503550  927984 main.go:144] libmachine: Using SSH client type: native
	I1229 07:54:14.503882  927984 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33823 <nil> <nil>}
	I1229 07:54:14.503903  927984 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:54:14.815782  927984 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:54:14.815852  927984 machine.go:97] duration metric: took 1.379265662s to provisionDockerMachine
	I1229 07:54:14.815868  927984 client.go:176] duration metric: took 3.385698821s to LocalClient.Create
	I1229 07:54:14.815889  927984 start.go:167] duration metric: took 3.385783482s to libmachine.API.Create "no-preload-822299"
	I1229 07:54:14.815900  927984 start.go:293] postStartSetup for "no-preload-822299" (driver="docker")
	I1229 07:54:14.815923  927984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:54:14.816003  927984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:54:14.816056  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:14.841960  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:14.951226  927984 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:54:14.955645  927984 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:54:14.955672  927984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:54:14.955683  927984 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:54:14.955739  927984 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:54:14.955823  927984 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:54:14.955932  927984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:54:14.964962  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:54:14.985805  927984 start.go:296] duration metric: took 169.887724ms for postStartSetup
	I1229 07:54:14.986260  927984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:54:15.011959  927984 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json ...
	I1229 07:54:15.013270  927984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:54:15.013326  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:15.035650  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:15.142188  927984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:54:15.147228  927984 start.go:128] duration metric: took 3.722593794s to createHost
	I1229 07:54:15.147255  927984 start.go:83] releasing machines lock for "no-preload-822299", held for 3.722731551s
	I1229 07:54:15.147340  927984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:54:15.165578  927984 ssh_runner.go:195] Run: cat /version.json
	I1229 07:54:15.165639  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:15.165918  927984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:54:15.165998  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:15.185633  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:15.194611  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:15.292677  927984 ssh_runner.go:195] Run: systemctl --version
	I1229 07:54:15.386429  927984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:54:15.421706  927984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:54:15.426126  927984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:54:15.426218  927984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:54:15.456484  927984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:54:15.456519  927984 start.go:496] detecting cgroup driver to use...
	I1229 07:54:15.456554  927984 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:54:15.456679  927984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:54:15.475922  927984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:54:15.489117  927984 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:54:15.489183  927984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:54:15.507454  927984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:54:15.528321  927984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:54:15.644461  927984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:54:15.802590  927984 docker.go:234] disabling docker service ...
	I1229 07:54:15.802658  927984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:54:15.823978  927984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:54:15.836851  927984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:54:15.955890  927984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:54:16.077606  927984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:54:16.090531  927984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:54:16.104963  927984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:54:16.105047  927984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.114759  927984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:54:16.114855  927984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.124089  927984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.133001  927984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.142033  927984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:54:16.150278  927984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.159009  927984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.172518  927984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:54:16.181591  927984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:54:16.189277  927984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:54:16.196730  927984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:54:16.316124  927984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:54:16.495632  927984 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:54:16.495704  927984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:54:16.505268  927984 start.go:574] Will wait 60s for crictl version
	I1229 07:54:16.505361  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:16.509858  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:54:16.538477  927984 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:54:16.538572  927984 ssh_runner.go:195] Run: crio --version
	I1229 07:54:16.575408  927984 ssh_runner.go:195] Run: crio --version
	I1229 07:54:16.617113  927984 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:54:16.620001  927984 cli_runner.go:164] Run: docker network inspect no-preload-822299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:54:16.636632  927984 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:54:16.640925  927984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:54:16.651411  927984 kubeadm.go:884] updating cluster {Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:54:16.651535  927984 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:54:16.651590  927984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:54:16.677051  927984 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1229 07:54:16.677083  927984 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1229 07:54:16.677134  927984 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:16.677363  927984 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:16.677507  927984 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:16.677599  927984 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:16.677714  927984 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:16.677826  927984 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1229 07:54:16.677926  927984 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:16.678028  927984 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:16.679048  927984 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:16.679468  927984 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:16.679613  927984 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:16.679821  927984 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:16.679947  927984 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:16.680175  927984 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:16.680336  927984 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1229 07:54:16.680959  927984 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:17.007432  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:17.014952  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:17.016450  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1229 07:54:17.029094  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:17.029953  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:17.030926  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:17.054471  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:17.098342  927984 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I1229 07:54:17.098424  927984 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:17.098486  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.151243  927984 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I1229 07:54:17.151348  927984 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:17.151417  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.158180  927984 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1229 07:54:17.158278  927984 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1229 07:54:17.158341  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.180232  927984 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1229 07:54:17.180334  927984 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:17.180404  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.183551  927984 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I1229 07:54:17.183618  927984 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:17.183691  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.187399  927984 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I1229 07:54:17.187718  927984 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:17.187784  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.187491  927984 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1229 07:54:17.187893  927984 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:17.187949  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:17.187606  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:17.187636  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:54:17.187676  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:17.187690  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:17.194423  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:17.304556  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:17.304679  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:17.304746  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:17.304843  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:54:17.304847  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:17.304905  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:17.304919  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:17.420375  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1229 07:54:17.420460  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:17.420514  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:17.420570  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1229 07:54:17.420619  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1229 07:54:17.420682  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1229 07:54:17.420744  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1229 07:54:17.526140  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I1229 07:54:17.526289  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:54:17.526387  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I1229 07:54:17.526467  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:54:17.526562  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1229 07:54:17.526676  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1229 07:54:17.526760  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1229 07:54:17.526839  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:54:17.526921  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1229 07:54:17.526988  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1229 07:54:17.527068  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1229 07:54:17.527138  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:54:17.572068  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1229 07:54:17.572302  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1229 07:54:17.572156  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1229 07:54:17.572415  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I1229 07:54:17.572175  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1229 07:54:17.572504  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I1229 07:54:17.572206  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I1229 07:54:17.572683  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:54:17.572225  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1229 07:54:17.572856  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:54:17.572243  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1229 07:54:17.572907  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I1229 07:54:17.572260  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1229 07:54:17.572947  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1229 07:54:17.636113  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1229 07:54:17.636158  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I1229 07:54:17.636201  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1229 07:54:17.636218  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1229 07:54:17.656109  927984 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1229 07:54:17.656232  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1229 07:54:17.968347  927984 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1229 07:54:17.968645  927984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:17.999578  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1229 07:54:18.133078  927984 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1229 07:54:18.133168  927984 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:18.133261  927984 ssh_runner.go:195] Run: which crictl
	I1229 07:54:18.141475  927984 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:54:18.141563  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1229 07:54:18.210563  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:19.772152  927984 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.630560812s)
	I1229 07:54:19.772183  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1229 07:54:19.772211  927984 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:54:19.772258  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1229 07:54:19.772352  927984 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.561703432s)
	I1229 07:54:19.772388  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:20.952624  927984 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.180331776s)
	I1229 07:54:20.952694  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1229 07:54:20.952729  927984 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:54:20.952867  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1229 07:54:20.952948  927984 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.180544381s)
	I1229 07:54:20.953015  927984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:22.203016  927984 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.24995659s)
	I1229 07:54:22.203114  927984 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.250208612s)
	I1229 07:54:22.203140  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1229 07:54:22.203160  927984 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1229 07:54:22.203163  927984 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:54:22.203309  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1229 07:54:22.203311  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:54:23.479101  927984 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.275710834s)
	I1229 07:54:23.479131  927984 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1229 07:54:23.479165  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1229 07:54:23.479301  927984 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.275970856s)
	I1229 07:54:23.479329  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1229 07:54:23.479365  927984 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:54:23.479424  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1229 07:54:25.296197  927984 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.816744539s)
	I1229 07:54:25.296229  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1229 07:54:25.296248  927984 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:54:25.296300  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1229 07:54:26.663136  927984 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.366809694s)
	I1229 07:54:26.663177  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1229 07:54:26.663208  927984 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:54:26.663261  927984 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1229 07:54:27.234207  927984 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1229 07:54:27.234307  927984 cache_images.go:125] Successfully loaded all cached images
	I1229 07:54:27.234329  927984 cache_images.go:94] duration metric: took 10.557232395s to LoadCachedImages
	I1229 07:54:27.234360  927984 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:54:27.234454  927984 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-822299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:54:27.234539  927984 ssh_runner.go:195] Run: crio config
	I1229 07:54:27.305373  927984 cni.go:84] Creating CNI manager for ""
	I1229 07:54:27.305435  927984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:54:27.305459  927984 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:54:27.305483  927984 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-822299 NodeName:no-preload-822299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:54:27.305604  927984 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-822299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:54:27.305682  927984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:54:27.313464  927984 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1229 07:54:27.313528  927984 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1229 07:54:27.322017  927984 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I1229 07:54:27.322031  927984 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I1229 07:54:27.322115  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1229 07:54:27.322175  927984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:54:27.322208  927984 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I1229 07:54:27.322264  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1229 07:54:27.338881  927984 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1229 07:54:27.338921  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I1229 07:54:27.338981  927984 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1229 07:54:27.338993  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I1229 07:54:27.339097  927984 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1229 07:54:27.356045  927984 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1229 07:54:27.356083  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
	I1229 07:54:28.180976  927984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:54:28.193249  927984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:54:28.213235  927984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:54:28.226989  927984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1229 07:54:28.243289  927984 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:54:28.247408  927984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:54:28.258726  927984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:54:28.369956  927984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:54:28.386318  927984 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299 for IP: 192.168.85.2
	I1229 07:54:28.386342  927984 certs.go:195] generating shared ca certs ...
	I1229 07:54:28.386359  927984 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.386494  927984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:54:28.386544  927984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:54:28.386556  927984 certs.go:257] generating profile certs ...
	I1229 07:54:28.386609  927984 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.key
	I1229 07:54:28.386626  927984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt with IP's: []
	I1229 07:54:28.425214  927984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt ...
	I1229 07:54:28.425291  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: {Name:mk9423fe68517746be7636e3f536f799628e0fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.425511  927984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.key ...
	I1229 07:54:28.425546  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.key: {Name:mkaf0c6e9c9a1fc3caff5d05c03d7163cdc51076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.425679  927984 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key.b9b3cced
	I1229 07:54:28.425716  927984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt.b9b3cced with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:54:28.492231  927984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt.b9b3cced ...
	I1229 07:54:28.492395  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt.b9b3cced: {Name:mk3dc840914ae217cf124fda448ac3958be0bcf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.492622  927984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key.b9b3cced ...
	I1229 07:54:28.492661  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key.b9b3cced: {Name:mkaf0d0d98ce72a55e527cb33f05c3a3fb395474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.492842  927984 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt.b9b3cced -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt
	I1229 07:54:28.492970  927984 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key.b9b3cced -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key
	I1229 07:54:28.493060  927984 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key
	I1229 07:54:28.493108  927984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.crt with IP's: []
	I1229 07:54:28.852923  927984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.crt ...
	I1229 07:54:28.852958  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.crt: {Name:mkb3e3a4f323ed56c82ddf8a63ac63729784cdb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.853146  927984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key ...
	I1229 07:54:28.853161  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key: {Name:mkd66b57473806b99894c6a3821f74ace0dd185d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:28.853352  927984 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:54:28.853403  927984 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:54:28.853419  927984 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:54:28.853446  927984 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:54:28.853476  927984 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:54:28.853504  927984 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:54:28.853554  927984 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:54:28.854134  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:54:28.873140  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:54:28.891298  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:54:28.910579  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:54:28.929025  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:54:28.947217  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:54:28.964538  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:54:28.983398  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:54:29.003033  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:54:29.022576  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:54:29.040341  927984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:54:29.058911  927984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:54:29.072025  927984 ssh_runner.go:195] Run: openssl version
	I1229 07:54:29.079292  927984 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:54:29.086740  927984 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:54:29.094784  927984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:54:29.098535  927984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:54:29.098601  927984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:54:29.139383  927984 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:54:29.146808  927984 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:54:29.154191  927984 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:54:29.161684  927984 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:54:29.169322  927984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:54:29.172993  927984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:54:29.173084  927984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:54:29.234915  927984 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:54:29.242596  927984 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:54:29.250071  927984 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:54:29.266530  927984 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:54:29.274077  927984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:54:29.280776  927984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:54:29.280874  927984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:54:29.323450  927984 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:54:29.330836  927984 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:54:29.338296  927984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:54:29.341885  927984 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:54:29.341943  927984 kubeadm.go:401] StartCluster: {Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:54:29.342051  927984 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:54:29.342147  927984 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:54:29.371302  927984 cri.go:96] found id: ""
	I1229 07:54:29.371426  927984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:54:29.379437  927984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:54:29.387379  927984 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:54:29.387490  927984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:54:29.395086  927984 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:54:29.395110  927984 kubeadm.go:158] found existing configuration files:
	
	I1229 07:54:29.395187  927984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:54:29.403112  927984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:54:29.403182  927984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:54:29.410565  927984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:54:29.418213  927984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:54:29.418309  927984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:54:29.425510  927984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:54:29.432970  927984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:54:29.433083  927984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:54:29.440707  927984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:54:29.448523  927984 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:54:29.448640  927984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:54:29.456230  927984 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:54:29.502826  927984 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:54:29.503209  927984 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:54:29.581387  927984 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:54:29.581505  927984 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:54:29.581569  927984 kubeadm.go:319] OS: Linux
	I1229 07:54:29.581675  927984 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:54:29.581763  927984 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:54:29.581843  927984 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:54:29.581958  927984 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:54:29.582053  927984 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:54:29.582165  927984 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:54:29.582235  927984 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:54:29.582313  927984 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:54:29.582385  927984 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:54:29.649565  927984 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:54:29.649744  927984 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:54:29.649889  927984 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:54:29.669213  927984 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:54:29.680263  927984 out.go:252]   - Generating certificates and keys ...
	I1229 07:54:29.680361  927984 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:54:29.680457  927984 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:54:29.928017  927984 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:54:30.233009  927984 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:54:30.825604  927984 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:54:31.244485  927984 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:54:31.355453  927984 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:54:31.355907  927984 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-822299] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:54:31.820756  927984 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:54:31.821226  927984 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-822299] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:54:31.868001  927984 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:54:32.578185  927984 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:54:32.767881  927984 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:54:32.768188  927984 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:54:33.188669  927984 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:54:33.299127  927984 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:54:33.445119  927984 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:54:33.693290  927984 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:54:33.896503  927984 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:54:33.897321  927984 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:54:33.900347  927984 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:54:33.929580  927984 out.go:252]   - Booting up control plane ...
	I1229 07:54:33.929774  927984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:54:33.929913  927984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:54:33.930034  927984 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:54:33.930185  927984 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:54:33.930354  927984 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:54:33.930904  927984 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:54:33.931225  927984 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:54:33.931271  927984 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:54:34.080973  927984 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:54:34.081095  927984 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:54:35.081825  927984 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000890791s
	I1229 07:54:35.085310  927984 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:54:35.085406  927984 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1229 07:54:35.085738  927984 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:54:35.085831  927984 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:54:36.595233  927984 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.509432549s
	I1229 07:54:38.271277  927984 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.185907606s
	I1229 07:54:40.087748  927984 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002312712s
	I1229 07:54:40.123202  927984 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:54:40.139033  927984 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:54:40.154073  927984 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:54:40.154295  927984 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-822299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:54:40.166666  927984 kubeadm.go:319] [bootstrap-token] Using token: p7bu0k.xd8niur5cfbd1drw
	I1229 07:54:40.169844  927984 out.go:252]   - Configuring RBAC rules ...
	I1229 07:54:40.169996  927984 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:54:40.175048  927984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:54:40.184458  927984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:54:40.189211  927984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:54:40.195662  927984 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:54:40.199783  927984 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:54:40.495034  927984 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:54:40.920871  927984 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:54:41.494383  927984 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:54:41.495682  927984 kubeadm.go:319] 
	I1229 07:54:41.495760  927984 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:54:41.495770  927984 kubeadm.go:319] 
	I1229 07:54:41.495848  927984 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:54:41.495857  927984 kubeadm.go:319] 
	I1229 07:54:41.495882  927984 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:54:41.495945  927984 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:54:41.495999  927984 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:54:41.496010  927984 kubeadm.go:319] 
	I1229 07:54:41.496068  927984 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:54:41.496076  927984 kubeadm.go:319] 
	I1229 07:54:41.496124  927984 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:54:41.496131  927984 kubeadm.go:319] 
	I1229 07:54:41.496184  927984 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:54:41.496262  927984 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:54:41.496335  927984 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:54:41.496343  927984 kubeadm.go:319] 
	I1229 07:54:41.496427  927984 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:54:41.496507  927984 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:54:41.496515  927984 kubeadm.go:319] 
	I1229 07:54:41.496599  927984 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token p7bu0k.xd8niur5cfbd1drw \
	I1229 07:54:41.496707  927984 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:54:41.496731  927984 kubeadm.go:319] 	--control-plane 
	I1229 07:54:41.496739  927984 kubeadm.go:319] 
	I1229 07:54:41.496849  927984 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:54:41.496859  927984 kubeadm.go:319] 
	I1229 07:54:41.496941  927984 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token p7bu0k.xd8niur5cfbd1drw \
	I1229 07:54:41.497047  927984 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
	I1229 07:54:41.501145  927984 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:54:41.501569  927984 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:54:41.501683  927984 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:54:41.501699  927984 cni.go:84] Creating CNI manager for ""
	I1229 07:54:41.501706  927984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:54:41.506865  927984 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:54:41.509892  927984 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:54:41.516659  927984 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:54:41.516678  927984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:54:41.530823  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:54:41.834241  927984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:54:41.834407  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:41.834489  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-822299 minikube.k8s.io/updated_at=2025_12_29T07_54_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=no-preload-822299 minikube.k8s.io/primary=true
	I1229 07:54:42.044173  927984 ops.go:34] apiserver oom_adj: -16
	I1229 07:54:42.044285  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:42.545283  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:43.045050  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:43.545369  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:44.044584  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:44.544677  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:45.046105  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:45.544355  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:46.044423  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:46.545043  927984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:54:46.643427  927984 kubeadm.go:1114] duration metric: took 4.809080166s to wait for elevateKubeSystemPrivileges
	I1229 07:54:46.643468  927984 kubeadm.go:403] duration metric: took 17.301522824s to StartCluster
	I1229 07:54:46.643488  927984 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:46.643556  927984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:54:46.644180  927984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:54:46.644407  927984 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:54:46.644511  927984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:54:46.644773  927984 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:54:46.644853  927984 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:54:46.644919  927984 addons.go:70] Setting storage-provisioner=true in profile "no-preload-822299"
	I1229 07:54:46.644939  927984 addons.go:239] Setting addon storage-provisioner=true in "no-preload-822299"
	I1229 07:54:46.644963  927984 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:54:46.645472  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:54:46.646082  927984 addons.go:70] Setting default-storageclass=true in profile "no-preload-822299"
	I1229 07:54:46.646114  927984 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-822299"
	I1229 07:54:46.646398  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:54:46.650616  927984 out.go:179] * Verifying Kubernetes components...
	I1229 07:54:46.658564  927984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:54:46.677004  927984 addons.go:239] Setting addon default-storageclass=true in "no-preload-822299"
	I1229 07:54:46.677051  927984 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:54:46.677504  927984 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:54:46.695304  927984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:54:46.698807  927984 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:54:46.698829  927984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:54:46.698899  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:46.716580  927984 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:54:46.716601  927984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:54:46.716663  927984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:54:46.753476  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:46.762258  927984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33823 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:54:46.881571  927984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:54:46.963115  927984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:54:47.121073  927984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:54:47.144409  927984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:54:47.628317  927984 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1229 07:54:47.630776  927984 node_ready.go:35] waiting up to 6m0s for node "no-preload-822299" to be "Ready" ...
	I1229 07:54:48.090648  927984 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:54:48.093585  927984 addons.go:530] duration metric: took 1.448716733s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:54:48.134082  927984 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-822299" context rescaled to 1 replicas
	W1229 07:54:49.634113  927984 node_ready.go:57] node "no-preload-822299" has "Ready":"False" status (will retry)
	W1229 07:54:52.133593  927984 node_ready.go:57] node "no-preload-822299" has "Ready":"False" status (will retry)
	W1229 07:54:54.634468  927984 node_ready.go:57] node "no-preload-822299" has "Ready":"False" status (will retry)
	W1229 07:54:57.134121  927984 node_ready.go:57] node "no-preload-822299" has "Ready":"False" status (will retry)
	W1229 07:54:59.634286  927984 node_ready.go:57] node "no-preload-822299" has "Ready":"False" status (will retry)
	I1229 07:55:00.634461  927984 node_ready.go:49] node "no-preload-822299" is "Ready"
	I1229 07:55:00.634493  927984 node_ready.go:38] duration metric: took 13.003690045s for node "no-preload-822299" to be "Ready" ...
	I1229 07:55:00.634513  927984 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:55:00.634590  927984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:55:00.648713  927984 api_server.go:72] duration metric: took 14.004264818s to wait for apiserver process to appear ...
	I1229 07:55:00.648742  927984 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:55:00.648763  927984 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:55:00.658454  927984 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:55:00.659758  927984 api_server.go:141] control plane version: v1.35.0
	I1229 07:55:00.659794  927984 api_server.go:131] duration metric: took 11.043762ms to wait for apiserver health ...
	I1229 07:55:00.659810  927984 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:55:00.663383  927984 system_pods.go:59] 8 kube-system pods found
	I1229 07:55:00.663427  927984 system_pods.go:61] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:00.663435  927984 system_pods.go:61] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running
	I1229 07:55:00.663441  927984 system_pods.go:61] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running
	I1229 07:55:00.663448  927984 system_pods.go:61] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:00.663453  927984 system_pods.go:61] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running
	I1229 07:55:00.663459  927984 system_pods.go:61] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running
	I1229 07:55:00.663464  927984 system_pods.go:61] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running
	I1229 07:55:00.663475  927984 system_pods.go:61] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:00.663486  927984 system_pods.go:74] duration metric: took 3.671047ms to wait for pod list to return data ...
	I1229 07:55:00.663499  927984 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:55:00.666807  927984 default_sa.go:45] found service account: "default"
	I1229 07:55:00.666903  927984 default_sa.go:55] duration metric: took 3.395779ms for default service account to be created ...
	I1229 07:55:00.666930  927984 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:55:00.670411  927984 system_pods.go:86] 8 kube-system pods found
	I1229 07:55:00.670451  927984 system_pods.go:89] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:00.670458  927984 system_pods.go:89] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running
	I1229 07:55:00.670466  927984 system_pods.go:89] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running
	I1229 07:55:00.670474  927984 system_pods.go:89] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:00.670480  927984 system_pods.go:89] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running
	I1229 07:55:00.670488  927984 system_pods.go:89] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running
	I1229 07:55:00.670526  927984 system_pods.go:89] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running
	I1229 07:55:00.670542  927984 system_pods.go:89] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:00.670573  927984 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:55:01.004139  927984 system_pods.go:86] 8 kube-system pods found
	I1229 07:55:01.004265  927984 system_pods.go:89] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:01.004294  927984 system_pods.go:89] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running
	I1229 07:55:01.004322  927984 system_pods.go:89] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running
	I1229 07:55:01.004359  927984 system_pods.go:89] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:01.004392  927984 system_pods.go:89] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running
	I1229 07:55:01.004414  927984 system_pods.go:89] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running
	I1229 07:55:01.004437  927984 system_pods.go:89] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running
	I1229 07:55:01.004470  927984 system_pods.go:89] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:01.350804  927984 system_pods.go:86] 8 kube-system pods found
	I1229 07:55:01.350898  927984 system_pods.go:89] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:01.350925  927984 system_pods.go:89] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running
	I1229 07:55:01.350958  927984 system_pods.go:89] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running
	I1229 07:55:01.350981  927984 system_pods.go:89] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running
	I1229 07:55:01.351212  927984 system_pods.go:89] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running
	I1229 07:55:01.351225  927984 system_pods.go:89] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running
	I1229 07:55:01.351231  927984 system_pods.go:89] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running
	I1229 07:55:01.351241  927984 system_pods.go:89] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Running
	I1229 07:55:01.351255  927984 system_pods.go:126] duration metric: took 684.310333ms to wait for k8s-apps to be running ...
	I1229 07:55:01.351267  927984 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:55:01.351328  927984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:55:01.368433  927984 system_svc.go:56] duration metric: took 17.143166ms WaitForService to wait for kubelet
	I1229 07:55:01.368512  927984 kubeadm.go:587] duration metric: took 14.724069434s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:55:01.368541  927984 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:55:01.371729  927984 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:55:01.371761  927984 node_conditions.go:123] node cpu capacity is 2
	I1229 07:55:01.371776  927984 node_conditions.go:105] duration metric: took 3.22928ms to run NodePressure ...
	I1229 07:55:01.371789  927984 start.go:242] waiting for startup goroutines ...
	I1229 07:55:01.371797  927984 start.go:247] waiting for cluster config update ...
	I1229 07:55:01.371809  927984 start.go:256] writing updated cluster config ...
	I1229 07:55:01.372103  927984 ssh_runner.go:195] Run: rm -f paused
	I1229 07:55:01.376172  927984 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:55:01.380217  927984 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9pqjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.385840  927984 pod_ready.go:94] pod "coredns-7d764666f9-9pqjr" is "Ready"
	I1229 07:55:02.385869  927984 pod_ready.go:86] duration metric: took 1.00562388s for pod "coredns-7d764666f9-9pqjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.389031  927984 pod_ready.go:83] waiting for pod "etcd-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.393799  927984 pod_ready.go:94] pod "etcd-no-preload-822299" is "Ready"
	I1229 07:55:02.393828  927984 pod_ready.go:86] duration metric: took 4.753344ms for pod "etcd-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.396257  927984 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.401532  927984 pod_ready.go:94] pod "kube-apiserver-no-preload-822299" is "Ready"
	I1229 07:55:02.401604  927984 pod_ready.go:86] duration metric: took 5.320002ms for pod "kube-apiserver-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.404017  927984 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.583752  927984 pod_ready.go:94] pod "kube-controller-manager-no-preload-822299" is "Ready"
	I1229 07:55:02.583781  927984 pod_ready.go:86] duration metric: took 179.733546ms for pod "kube-controller-manager-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:02.784144  927984 pod_ready.go:83] waiting for pod "kube-proxy-j7ktq" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:03.183694  927984 pod_ready.go:94] pod "kube-proxy-j7ktq" is "Ready"
	I1229 07:55:03.183723  927984 pod_ready.go:86] duration metric: took 399.542476ms for pod "kube-proxy-j7ktq" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:03.384437  927984 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:03.783908  927984 pod_ready.go:94] pod "kube-scheduler-no-preload-822299" is "Ready"
	I1229 07:55:03.783993  927984 pod_ready.go:86] duration metric: took 399.527396ms for pod "kube-scheduler-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:55:03.784017  927984 pod_ready.go:40] duration metric: took 2.40781059s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:55:03.843608  927984 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:55:03.846771  927984 out.go:203] 
	W1229 07:55:03.849766  927984 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:55:03.852727  927984 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:55:03.856695  927984 out.go:179] * Done! kubectl is now configured to use "no-preload-822299" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 07:55:00 no-preload-822299 crio[837]: time="2025-12-29T07:55:00.936542154Z" level=info msg="Created container f11967b5c13beeb5e9dd6cc6903102caa34fa4388e8660a7eec5cbab602b33d4: kube-system/coredns-7d764666f9-9pqjr/coredns" id=f012162e-a6af-463a-9db4-aa22ba1575e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:55:00 no-preload-822299 crio[837]: time="2025-12-29T07:55:00.937555962Z" level=info msg="Starting container: f11967b5c13beeb5e9dd6cc6903102caa34fa4388e8660a7eec5cbab602b33d4" id=9dd4ba45-40c9-4af7-8eb5-a4866d75def4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:55:00 no-preload-822299 crio[837]: time="2025-12-29T07:55:00.939377126Z" level=info msg="Started container" PID=2434 containerID=f11967b5c13beeb5e9dd6cc6903102caa34fa4388e8660a7eec5cbab602b33d4 description=kube-system/coredns-7d764666f9-9pqjr/coredns id=9dd4ba45-40c9-4af7-8eb5-a4866d75def4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75b815d13f5cfb8d3c53a7b71a6f04f6551305c23cda55ac7bace6c003be1a0c
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.370052542Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2511f101-3c6f-4dcc-8ec4-1f9dfd47642c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.370166233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.375686793Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b UID:65d3fc57-2c89-4df3-a2fb-980d8699e1c9 NetNS:/var/run/netns/b93ae6a1-28e2-4bed-9de5-fe1145af94f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002d28088}] Aliases:map[]}"
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.375851258Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.391602883Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b UID:65d3fc57-2c89-4df3-a2fb-980d8699e1c9 NetNS:/var/run/netns/b93ae6a1-28e2-4bed-9de5-fe1145af94f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002d28088}] Aliases:map[]}"
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.391790954Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.396222629Z" level=info msg="Ran pod sandbox 439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b with infra container: default/busybox/POD" id=2511f101-3c6f-4dcc-8ec4-1f9dfd47642c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.397791509Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=577fbced-7244-4b1c-87c8-2138404bad60 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.397960561Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=577fbced-7244-4b1c-87c8-2138404bad60 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.398062674Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=577fbced-7244-4b1c-87c8-2138404bad60 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.399504349Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=24898411-23a6-4c7f-b45f-5c21dded95d4 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:55:04 no-preload-822299 crio[837]: time="2025-12-29T07:55:04.40006802Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.389503322Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=24898411-23a6-4c7f-b45f-5c21dded95d4 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.390192294Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5198265-8252-4aa1-a4f3-823632380a07 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.392143437Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00b906fd-2a7a-4c7e-b6dd-a79ed75cb135 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.3994155Z" level=info msg="Creating container: default/busybox/busybox" id=91e2da63-5611-4fcd-94f9-3442ceeec9f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.399565893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.404671189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.40526321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.425504882Z" level=info msg="Created container a5404e5353f50d971afb807d57bab0ada5f77d71eafc4f0ee2b2e82d6007a15a: default/busybox/busybox" id=91e2da63-5611-4fcd-94f9-3442ceeec9f8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.426420843Z" level=info msg="Starting container: a5404e5353f50d971afb807d57bab0ada5f77d71eafc4f0ee2b2e82d6007a15a" id=ea9af38a-de74-446a-9411-2d419abe51f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:55:06 no-preload-822299 crio[837]: time="2025-12-29T07:55:06.428303973Z" level=info msg="Started container" PID=2493 containerID=a5404e5353f50d971afb807d57bab0ada5f77d71eafc4f0ee2b2e82d6007a15a description=default/busybox/busybox id=ea9af38a-de74-446a-9411-2d419abe51f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a5404e5353f50       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   439ecebe07643       busybox                                     default
	f11967b5c13be       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   75b815d13f5cf       coredns-7d764666f9-9pqjr                    kube-system
	14ba76c6eab82       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   f4594f4d64d85       storage-provisioner                         kube-system
	e1b5dd9d409c9       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   2576ce432daa6       kindnet-mtpz6                               kube-system
	3c8e1fe7134bd       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   a7a2bd3088bf0       kube-proxy-j7ktq                            kube-system
	7e2589df28ef8       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      39 seconds ago      Running             kube-scheduler            0                   e1fbbad66ac73       kube-scheduler-no-preload-822299            kube-system
	bd073484dc1b3       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      39 seconds ago      Running             kube-controller-manager   0                   8b00f6768d4fc       kube-controller-manager-no-preload-822299   kube-system
	d7066ad40659b       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      39 seconds ago      Running             etcd                      0                   b57ed18a621c8       etcd-no-preload-822299                      kube-system
	2b45858dba979       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      39 seconds ago      Running             kube-apiserver            0                   74dbf899a52a0       kube-apiserver-no-preload-822299            kube-system
	
	
	==> coredns [f11967b5c13beeb5e9dd6cc6903102caa34fa4388e8660a7eec5cbab602b33d4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35848 - 27643 "HINFO IN 4435977901488328977.8877806043631366036. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0234657s
	
	
	==> describe nodes <==
	Name:               no-preload-822299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-822299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=no-preload-822299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_54_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:54:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-822299
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:55:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:55:11 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:55:11 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:55:11 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:55:11 +0000   Mon, 29 Dec 2025 07:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-822299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                19ace3de-03e9-4b9a-9295-4970de68f6fd
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-9pqjr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-822299                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-mtpz6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-822299             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-822299    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-j7ktq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-822299             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-822299 event: Registered Node no-preload-822299 in Controller
	
	
	==> dmesg <==
	[Dec29 07:25] overlayfs: idmapped layers are currently not supported
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d7066ad40659b805811c38f31e9dc01abc6822ef330632ac162e542791730dd6] <==
	{"level":"info","ts":"2025-12-29T07:54:35.636342Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:54:36.196833Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:54:36.196967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:54:36.197055Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-29T07:54:36.197107Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:54:36.197159Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:54:36.199371Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:54:36.199442Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:54:36.199483Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:54:36.199518Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:54:36.201538Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:54:36.204998Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-822299 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:54:36.205072Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:54:36.206073Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:54:36.206231Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:54:36.206291Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:54:36.206355Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:54:36.206445Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:54:36.206491Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:54:36.207437Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:54:36.208845Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:54:36.213392Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:54:36.214812Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:54:36.216427Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:54:36.224820Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 07:55:14 up  4:37,  0 user,  load average: 2.26, 1.96, 2.32
	Linux no-preload-822299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e1b5dd9d409c92a44d284149c407c1106be6bf43fac8e3ac37b89200b7d9d2df] <==
	I1229 07:54:49.860236       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:54:49.860475       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:54:49.860598       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:54:49.860618       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:54:49.860630       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:54:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:54:50.061538       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:54:50.061819       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:54:50.152865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:54:50.153126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:54:50.353900       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:54:50.353933       1 metrics.go:72] Registering metrics
	I1229 07:54:50.353994       1 controller.go:711] "Syncing nftables rules"
	I1229 07:55:00.062011       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:55:00.062087       1 main.go:301] handling current node
	I1229 07:55:10.063663       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:55:10.063961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b45858dba9793cce227071edc9f52b6f665db38b1637d14a433be6ce1cd9614] <==
	I1229 07:54:38.365423       1 policy_source.go:248] refreshing policies
	E1229 07:54:38.395282       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1229 07:54:38.427286       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:54:38.462280       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:54:38.462885       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:54:38.494296       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:54:38.563801       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:54:39.021928       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:54:39.029810       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:54:39.029839       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:54:39.736290       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:54:39.788254       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:54:39.935224       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:54:39.943161       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1229 07:54:39.944299       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:54:39.949469       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:54:40.317292       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:54:40.902951       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:54:40.919686       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:54:40.932346       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:54:45.774279       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:54:45.780923       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:54:45.871903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:54:46.330199       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1229 07:55:13.204304       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:45654: use of closed network connection
	
	
	==> kube-controller-manager [bd073484dc1b3b73ce243617f415b87bf3c764b788dff4c895da799e41881f9b] <==
	I1229 07:54:45.291453       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.296300       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.296370       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.296454       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.296497       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.296663       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.292493       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.297047       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:54:45.297143       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-822299"
	I1229 07:54:45.297228       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:54:45.292509       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.294376       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.294889       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.300287       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.300521       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.301012       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.301049       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.291495       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.302286       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.342517       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-822299" podCIDRs=["10.244.0.0/24"]
	I1229 07:54:45.373771       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.392224       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:45.392263       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:54:45.392269       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:55:05.303282       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [3c8e1fe7134bda110d0926f94b58237882fa36cb796eb49bd78f0bad7dd7af68] <==
	I1229 07:54:47.066083       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:54:47.183049       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:54:47.288950       1 shared_informer.go:377] "Caches are synced"
	I1229 07:54:47.288981       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:54:47.289078       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:54:47.320550       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:54:47.320607       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:54:47.326092       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:54:47.326423       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:54:47.326439       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:54:47.327782       1 config.go:200] "Starting service config controller"
	I1229 07:54:47.327792       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:54:47.327809       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:54:47.327813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:54:47.327823       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:54:47.327826       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:54:47.328461       1 config.go:309] "Starting node config controller"
	I1229 07:54:47.328468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:54:47.328474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:54:47.427948       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:54:47.427981       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:54:47.428017       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7e2589df28ef890fd919cca5fbac19fcea977db02d83f0203f93521edf714c6e] <==
	E1229 07:54:38.285819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:54:38.286462       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:54:38.286584       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:54:38.286701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:54:38.286801       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:54:38.286931       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:54:38.287175       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:54:38.287281       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:54:38.287381       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:54:38.287469       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:54:38.287652       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:54:38.287726       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:54:38.293046       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:54:38.301118       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:54:38.302537       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:54:38.302638       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:54:38.302734       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:54:39.180969       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:54:39.233531       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:54:39.272504       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:54:39.290818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:54:39.295509       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:54:39.347008       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:54:39.507604       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1229 07:54:42.230513       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:54:46 no-preload-822299 kubelet[1937]: W1229 07:54:46.778482    1937 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/crio-a7a2bd3088bf0e6f072ebc2fb4ccbc1cfa9ee603f833e07a82b7b0efdb9382a4 WatchSource:0}: Error finding container a7a2bd3088bf0e6f072ebc2fb4ccbc1cfa9ee603f833e07a82b7b0efdb9382a4: Status 404 returned error can't find the container with id a7a2bd3088bf0e6f072ebc2fb4ccbc1cfa9ee603f833e07a82b7b0efdb9382a4
	Dec 29 07:54:46 no-preload-822299 kubelet[1937]: W1229 07:54:46.784963    1937 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/crio-2576ce432daa6a7c60da0b2b355924284d74ecab400da4c9716b9e1bb2107581 WatchSource:0}: Error finding container 2576ce432daa6a7c60da0b2b355924284d74ecab400da4c9716b9e1bb2107581: Status 404 returned error can't find the container with id 2576ce432daa6a7c60da0b2b355924284d74ecab400da4c9716b9e1bb2107581
	Dec 29 07:54:47 no-preload-822299 kubelet[1937]: E1229 07:54:47.633639    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-822299" containerName="kube-controller-manager"
	Dec 29 07:54:47 no-preload-822299 kubelet[1937]: I1229 07:54:47.673762    1937 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-j7ktq" podStartSLOduration=1.67374469 podStartE2EDuration="1.67374469s" podCreationTimestamp="2025-12-29 07:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:54:47.014184777 +0000 UTC m=+6.280530624" watchObservedRunningTime="2025-12-29 07:54:47.67374469 +0000 UTC m=+6.940090554"
	Dec 29 07:54:48 no-preload-822299 kubelet[1937]: E1229 07:54:48.274105    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-822299" containerName="kube-scheduler"
	Dec 29 07:54:50 no-preload-822299 kubelet[1937]: I1229 07:54:50.905271    1937 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-mtpz6" podStartSLOduration=1.91940357 podStartE2EDuration="4.905256427s" podCreationTimestamp="2025-12-29 07:54:46 +0000 UTC" firstStartedPulling="2025-12-29 07:54:46.803367609 +0000 UTC m=+6.069713448" lastFinishedPulling="2025-12-29 07:54:49.789220458 +0000 UTC m=+9.055566305" observedRunningTime="2025-12-29 07:54:50.033395551 +0000 UTC m=+9.299741398" watchObservedRunningTime="2025-12-29 07:54:50.905256427 +0000 UTC m=+10.171602266"
	Dec 29 07:54:50 no-preload-822299 kubelet[1937]: E1229 07:54:50.984546    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-822299" containerName="kube-apiserver"
	Dec 29 07:54:52 no-preload-822299 kubelet[1937]: E1229 07:54:52.504830    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-822299" containerName="etcd"
	Dec 29 07:54:57 no-preload-822299 kubelet[1937]: E1229 07:54:57.642338    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-822299" containerName="kube-controller-manager"
	Dec 29 07:54:58 no-preload-822299 kubelet[1937]: E1229 07:54:58.282268    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-822299" containerName="kube-scheduler"
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: I1229 07:55:00.465934    1937 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: I1229 07:55:00.609023    1937 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkfj\" (UniqueName: \"kubernetes.io/projected/ac000c47-e858-4827-9d8d-b6246442cde8-kube-api-access-5qkfj\") pod \"storage-provisioner\" (UID: \"ac000c47-e858-4827-9d8d-b6246442cde8\") " pod="kube-system/storage-provisioner"
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: I1229 07:55:00.609255    1937 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da4ef017-c306-45e2-b667-f47b190319ef-config-volume\") pod \"coredns-7d764666f9-9pqjr\" (UID: \"da4ef017-c306-45e2-b667-f47b190319ef\") " pod="kube-system/coredns-7d764666f9-9pqjr"
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: I1229 07:55:00.609375    1937 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ac000c47-e858-4827-9d8d-b6246442cde8-tmp\") pod \"storage-provisioner\" (UID: \"ac000c47-e858-4827-9d8d-b6246442cde8\") " pod="kube-system/storage-provisioner"
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: I1229 07:55:00.609413    1937 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdlp\" (UniqueName: \"kubernetes.io/projected/da4ef017-c306-45e2-b667-f47b190319ef-kube-api-access-2rdlp\") pod \"coredns-7d764666f9-9pqjr\" (UID: \"da4ef017-c306-45e2-b667-f47b190319ef\") " pod="kube-system/coredns-7d764666f9-9pqjr"
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: W1229 07:55:00.833383    1937 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/crio-f4594f4d64d853b484bbfe68140dbdede31172d144c6ec6f5602f117884bbfd5 WatchSource:0}: Error finding container f4594f4d64d853b484bbfe68140dbdede31172d144c6ec6f5602f117884bbfd5: Status 404 returned error can't find the container with id f4594f4d64d853b484bbfe68140dbdede31172d144c6ec6f5602f117884bbfd5
	Dec 29 07:55:00 no-preload-822299 kubelet[1937]: W1229 07:55:00.865851    1937 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/crio-75b815d13f5cfb8d3c53a7b71a6f04f6551305c23cda55ac7bace6c003be1a0c WatchSource:0}: Error finding container 75b815d13f5cfb8d3c53a7b71a6f04f6551305c23cda55ac7bace6c003be1a0c: Status 404 returned error can't find the container with id 75b815d13f5cfb8d3c53a7b71a6f04f6551305c23cda55ac7bace6c003be1a0c
	Dec 29 07:55:01 no-preload-822299 kubelet[1937]: E1229 07:55:01.000184    1937 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-822299" containerName="kube-apiserver"
	Dec 29 07:55:01 no-preload-822299 kubelet[1937]: E1229 07:55:01.041204    1937 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9pqjr" containerName="coredns"
	Dec 29 07:55:01 no-preload-822299 kubelet[1937]: I1229 07:55:01.086811    1937 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.086793599 podStartE2EDuration="13.086793599s" podCreationTimestamp="2025-12-29 07:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:55:01.065919857 +0000 UTC m=+20.332265696" watchObservedRunningTime="2025-12-29 07:55:01.086793599 +0000 UTC m=+20.353139438"
	Dec 29 07:55:02 no-preload-822299 kubelet[1937]: E1229 07:55:02.043213    1937 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9pqjr" containerName="coredns"
	Dec 29 07:55:02 no-preload-822299 kubelet[1937]: I1229 07:55:02.059515    1937 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9pqjr" podStartSLOduration=16.059498489 podStartE2EDuration="16.059498489s" podCreationTimestamp="2025-12-29 07:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:55:01.087990997 +0000 UTC m=+20.354336861" watchObservedRunningTime="2025-12-29 07:55:02.059498489 +0000 UTC m=+21.325844328"
	Dec 29 07:55:03 no-preload-822299 kubelet[1937]: E1229 07:55:03.045409    1937 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9pqjr" containerName="coredns"
	Dec 29 07:55:04 no-preload-822299 kubelet[1937]: I1229 07:55:04.137050    1937 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rg2\" (UniqueName: \"kubernetes.io/projected/65d3fc57-2c89-4df3-a2fb-980d8699e1c9-kube-api-access-d8rg2\") pod \"busybox\" (UID: \"65d3fc57-2c89-4df3-a2fb-980d8699e1c9\") " pod="default/busybox"
	Dec 29 07:55:04 no-preload-822299 kubelet[1937]: W1229 07:55:04.394262    1937 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/crio-439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b WatchSource:0}: Error finding container 439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b: Status 404 returned error can't find the container with id 439ecebe076439457a0d8e6da14b5c2482602abbc688d685d71f541afb77f35b
	
	
	==> storage-provisioner [14ba76c6eab82484b24d182e535a7849cb65978e488e706d06723c8ccafc65b1] <==
	I1229 07:55:00.932549       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:55:00.954469       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:55:00.954542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:55:00.961266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:00.970577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:55:00.970834       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:55:00.972101       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-822299_478a1f2c-9030-45ac-a0ed-9f98f67195e8!
	I1229 07:55:00.972269       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40fb3eac-348b-44c6-ae27-7f5b53dd11a0", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-822299_478a1f2c-9030-45ac-a0ed-9f98f67195e8 became leader
	W1229 07:55:00.975582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:00.997758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:55:01.074581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-822299_478a1f2c-9030-45ac-a0ed-9f98f67195e8!
	W1229 07:55:03.002830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:03.011312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:05.014655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:05.024119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:07.027725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:07.035629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:09.038664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:09.045849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:11.052047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:11.058346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:13.061726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:13.067239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:15.070803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:55:15.078564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-822299 -n no-preload-822299
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-822299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.42s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-822299 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-822299 --alsologtostderr -v=1: exit status 80 (2.593572995s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-822299 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:56:26.630560  935176 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:56:26.630773  935176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:56:26.630805  935176 out.go:374] Setting ErrFile to fd 2...
	I1229 07:56:26.630826  935176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:56:26.631117  935176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:56:26.631408  935176 out.go:368] Setting JSON to false
	I1229 07:56:26.631464  935176 mustload.go:66] Loading cluster: no-preload-822299
	I1229 07:56:26.631889  935176 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:56:26.632379  935176 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:56:26.649626  935176 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:56:26.649941  935176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:56:26.715161  935176 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:56:26.700936644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:56:26.716007  935176 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-822299 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:56:26.723100  935176 out.go:179] * Pausing node no-preload-822299 ... 
	I1229 07:56:26.727161  935176 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:56:26.727582  935176 ssh_runner.go:195] Run: systemctl --version
	I1229 07:56:26.727636  935176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:56:26.750433  935176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:56:26.876134  935176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:56:26.890084  935176 pause.go:52] kubelet running: true
	I1229 07:56:26.890167  935176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:56:27.148432  935176 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:56:27.148513  935176 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:56:27.219220  935176 cri.go:96] found id: "1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f"
	I1229 07:56:27.219243  935176 cri.go:96] found id: "207de0eff4ea65084cae90279e02da02d1295619882838059e4fbfcf26208f18"
	I1229 07:56:27.219247  935176 cri.go:96] found id: "027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed"
	I1229 07:56:27.219252  935176 cri.go:96] found id: "8c7aaa2e263d66f365a99d3972aed68fc1fca62757f4eb3b373fe85d0cbf3f11"
	I1229 07:56:27.219256  935176 cri.go:96] found id: "4b778416881b60e07179255537ddb5371244f603b8bc5d645105c9f71226af14"
	I1229 07:56:27.219260  935176 cri.go:96] found id: "6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989"
	I1229 07:56:27.219263  935176 cri.go:96] found id: "41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8"
	I1229 07:56:27.219266  935176 cri.go:96] found id: "0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d"
	I1229 07:56:27.219287  935176 cri.go:96] found id: "b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59"
	I1229 07:56:27.219298  935176 cri.go:96] found id: "242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9"
	I1229 07:56:27.219302  935176 cri.go:96] found id: "8b4305f000816e5647e84ad1081d4c1e587f0138a84b81bb87dd316602494d5c"
	I1229 07:56:27.219306  935176 cri.go:96] found id: ""
	I1229 07:56:27.219367  935176 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:56:27.230656  935176 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:56:27Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:56:27.363064  935176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:56:27.378983  935176 pause.go:52] kubelet running: false
	I1229 07:56:27.379062  935176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:56:27.570250  935176 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:56:27.570323  935176 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:56:27.643355  935176 cri.go:96] found id: "1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f"
	I1229 07:56:27.643379  935176 cri.go:96] found id: "207de0eff4ea65084cae90279e02da02d1295619882838059e4fbfcf26208f18"
	I1229 07:56:27.643391  935176 cri.go:96] found id: "027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed"
	I1229 07:56:27.643397  935176 cri.go:96] found id: "8c7aaa2e263d66f365a99d3972aed68fc1fca62757f4eb3b373fe85d0cbf3f11"
	I1229 07:56:27.643401  935176 cri.go:96] found id: "4b778416881b60e07179255537ddb5371244f603b8bc5d645105c9f71226af14"
	I1229 07:56:27.643404  935176 cri.go:96] found id: "6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989"
	I1229 07:56:27.643408  935176 cri.go:96] found id: "41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8"
	I1229 07:56:27.643411  935176 cri.go:96] found id: "0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d"
	I1229 07:56:27.643414  935176 cri.go:96] found id: "b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59"
	I1229 07:56:27.643420  935176 cri.go:96] found id: "242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9"
	I1229 07:56:27.643424  935176 cri.go:96] found id: "8b4305f000816e5647e84ad1081d4c1e587f0138a84b81bb87dd316602494d5c"
	I1229 07:56:27.643427  935176 cri.go:96] found id: ""
	I1229 07:56:27.643481  935176 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:56:28.048341  935176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:56:28.062728  935176 pause.go:52] kubelet running: false
	I1229 07:56:28.062816  935176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:56:28.250753  935176 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:56:28.250830  935176 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:56:28.322574  935176 cri.go:96] found id: "1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f"
	I1229 07:56:28.322600  935176 cri.go:96] found id: "207de0eff4ea65084cae90279e02da02d1295619882838059e4fbfcf26208f18"
	I1229 07:56:28.322606  935176 cri.go:96] found id: "027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed"
	I1229 07:56:28.322610  935176 cri.go:96] found id: "8c7aaa2e263d66f365a99d3972aed68fc1fca62757f4eb3b373fe85d0cbf3f11"
	I1229 07:56:28.322613  935176 cri.go:96] found id: "4b778416881b60e07179255537ddb5371244f603b8bc5d645105c9f71226af14"
	I1229 07:56:28.322617  935176 cri.go:96] found id: "6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989"
	I1229 07:56:28.322620  935176 cri.go:96] found id: "41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8"
	I1229 07:56:28.322623  935176 cri.go:96] found id: "0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d"
	I1229 07:56:28.322626  935176 cri.go:96] found id: "b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59"
	I1229 07:56:28.322632  935176 cri.go:96] found id: "242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9"
	I1229 07:56:28.322636  935176 cri.go:96] found id: "8b4305f000816e5647e84ad1081d4c1e587f0138a84b81bb87dd316602494d5c"
	I1229 07:56:28.322638  935176 cri.go:96] found id: ""
	I1229 07:56:28.322688  935176 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:56:28.869654  935176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:56:28.884743  935176 pause.go:52] kubelet running: false
	I1229 07:56:28.884834  935176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:56:29.066532  935176 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:56:29.066613  935176 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:56:29.135295  935176 cri.go:96] found id: "1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f"
	I1229 07:56:29.135334  935176 cri.go:96] found id: "207de0eff4ea65084cae90279e02da02d1295619882838059e4fbfcf26208f18"
	I1229 07:56:29.135365  935176 cri.go:96] found id: "027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed"
	I1229 07:56:29.135387  935176 cri.go:96] found id: "8c7aaa2e263d66f365a99d3972aed68fc1fca62757f4eb3b373fe85d0cbf3f11"
	I1229 07:56:29.135394  935176 cri.go:96] found id: "4b778416881b60e07179255537ddb5371244f603b8bc5d645105c9f71226af14"
	I1229 07:56:29.135408  935176 cri.go:96] found id: "6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989"
	I1229 07:56:29.135412  935176 cri.go:96] found id: "41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8"
	I1229 07:56:29.135415  935176 cri.go:96] found id: "0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d"
	I1229 07:56:29.135418  935176 cri.go:96] found id: "b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59"
	I1229 07:56:29.135429  935176 cri.go:96] found id: "242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9"
	I1229 07:56:29.135437  935176 cri.go:96] found id: "8b4305f000816e5647e84ad1081d4c1e587f0138a84b81bb87dd316602494d5c"
	I1229 07:56:29.135440  935176 cri.go:96] found id: ""
	I1229 07:56:29.135496  935176 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:56:29.150881  935176 out.go:203] 
	W1229 07:56:29.154155  935176 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:56:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:56:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:56:29.154184  935176 out.go:285] * 
	* 
	W1229 07:56:29.158057  935176 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:56:29.160553  935176 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-822299 --alsologtostderr -v=1 failed: exit status 80
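The failure above comes from the `sudo runc list -f json` step of the pause path, which exited with status 1 and "open /run/runc: no such file or directory" (see the stderr capture). A minimal sketch for re-running the same checks by hand, assuming the no-preload-822299 node container is still up; the commands are taken from the ssh_runner/cri lines in the log, and the `ls` probe is only a suggested next step, not something the test itself runs:

	# the exact call that fails inside pause (see ssh_runner.go above)
	out/minikube-linux-arm64 -p no-preload-822299 ssh -- sudo runc list -f json
	# check whether the runc state directory exists at all on the node
	out/minikube-linux-arm64 -p no-preload-822299 ssh -- ls -ld /run/runc
	# the container listing that still succeeds (see cri.go above)
	out/minikube-linux-arm64 -p no-preload-822299 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system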
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-822299
helpers_test.go:244: (dbg) docker inspect no-preload-822299:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70",
	        "Created": "2025-12-29T07:54:12.225052256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 932616,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:55:28.157173327Z",
	            "FinishedAt": "2025-12-29T07:55:27.357359899Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/hostname",
	        "HostsPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/hosts",
	        "LogPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70-json.log",
	        "Name": "/no-preload-822299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-822299:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-822299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70",
	                "LowerDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-822299",
	                "Source": "/var/lib/docker/volumes/no-preload-822299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-822299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-822299",
	                "name.minikube.sigs.k8s.io": "no-preload-822299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6420e632940d16d15f91e4e24f596a1c7c12da8e37d3a8b47a52e6be3b02b051",
	            "SandboxKey": "/var/run/docker/netns/6420e632940d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-822299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:60:2a:94:cb:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fa6b3c21ae85fa8b96c74beecaa8ac1deb81e91b1a2aaf65c5b20de7bb77be6",
	                    "EndpointID": "8bd798ff9ffc62daf17c43e864da6011cce17ea5daacc261af1869bd7af75e76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-822299",
	                        "199b371fef13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
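The inspect dump above mainly confirms that the node container is still running and shows the host ports mapped for it (22/tcp on 127.0.0.1:33828, 8443/tcp on 33831, and so on). For a quick post-mortem the full dump is rarely needed; a minimal sketch, reusing the same Go templates minikube itself passes to cli_runner elsewhere in this log:

	# container state only
	docker container inspect no-preload-822299 --format '{{.State.Status}}'
	# host port mapped to the node's SSH port (22/tcp)
	docker container inspect no-preload-822299 -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'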
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299: exit status 2 (332.384198ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
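A non-zero status here is consistent with the pause attempt above having already run `systemctl disable --now kubelet` ("kubelet running: false"): the host container is Running while the kubelet is stopped. A short sketch of follow-up checks, assuming a recent minikube; the plain invocation and `-o json` are standard status flags but are not exercised by this test:

	# show every component (host, kubelet, apiserver, kubeconfig), not just the host
	out/minikube-linux-arm64 status -p no-preload-822299
	# machine-readable form, easier to assert on in scripts
	out/minikube-linux-arm64 status -p no-preload-822299 -o json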
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-822299 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-822299 logs -n 25: (1.294356384s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562  │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:55:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:55:27.885161  932489 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:55:27.885288  932489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:55:27.885299  932489 out.go:374] Setting ErrFile to fd 2...
	I1229 07:55:27.885305  932489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:55:27.885657  932489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:55:27.886107  932489 out.go:368] Setting JSON to false
	I1229 07:55:27.887263  932489 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16680,"bootTime":1766978248,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:55:27.887351  932489 start.go:143] virtualization:  
	I1229 07:55:27.890790  932489 out.go:179] * [no-preload-822299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:55:27.894610  932489 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:55:27.894709  932489 notify.go:221] Checking for updates...
	I1229 07:55:27.900561  932489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:55:27.903485  932489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:55:27.906285  932489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:55:27.909027  932489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:55:27.911861  932489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:55:27.915098  932489 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:55:27.915699  932489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:55:27.942032  932489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:55:27.942154  932489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:55:28.006165  932489 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:55:27.994615732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:55:28.006312  932489 docker.go:319] overlay module found
	I1229 07:55:28.011541  932489 out.go:179] * Using the docker driver based on existing profile
	I1229 07:55:28.014545  932489 start.go:309] selected driver: docker
	I1229 07:55:28.014574  932489 start.go:928] validating driver "docker" against &{Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:55:28.014689  932489 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:55:28.015433  932489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:55:28.069888  932489 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:55:28.060826981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:55:28.070216  932489 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:55:28.070256  932489 cni.go:84] Creating CNI manager for ""
	I1229 07:55:28.070327  932489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:55:28.070371  932489 start.go:353] cluster config:
	{Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:55:28.073618  932489 out.go:179] * Starting "no-preload-822299" primary control-plane node in "no-preload-822299" cluster
	I1229 07:55:28.076397  932489 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:55:28.079248  932489 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:55:28.082111  932489 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:55:28.082188  932489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:55:28.082260  932489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json ...
	I1229 07:55:28.082530  932489 cache.go:107] acquiring lock: {Name:mk77910156d27401bb7909a31b93ec051f9aa764 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082617  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:55:28.082629  932489 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.293µs
	I1229 07:55:28.082643  932489 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:55:28.082656  932489 cache.go:107] acquiring lock: {Name:mk7acdd26b6f5f36232f222f1b8b250f3e55a098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082690  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:55:28.082700  932489 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 46.474µs
	I1229 07:55:28.082707  932489 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:55:28.082722  932489 cache.go:107] acquiring lock: {Name:mk6c222e06ab2a6eba07c71c6bbf60553a466256 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082754  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:55:28.082764  932489 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 43.315µs
	I1229 07:55:28.082770  932489 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:55:28.082780  932489 cache.go:107] acquiring lock: {Name:mkc4b5fee2fc8ad3886c6f422b410c403d3cd271 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082811  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:55:28.082819  932489 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 40.526µs
	I1229 07:55:28.082825  932489 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:55:28.082843  932489 cache.go:107] acquiring lock: {Name:mkfe408e55325d204df091c798ec530361ec90e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082873  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:55:28.082882  932489 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 48.337µs
	I1229 07:55:28.082888  932489 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:55:28.082897  932489 cache.go:107] acquiring lock: {Name:mk7e5915b6ec72206d698779b421c83aee716ab1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082926  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:55:28.082936  932489 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.877µs
	I1229 07:55:28.082942  932489 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:55:28.082958  932489 cache.go:107] acquiring lock: {Name:mk42d4be1fc1ab08338ed03f966f32cf4e137a60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082994  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:55:28.083003  932489 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 47.557µs
	I1229 07:55:28.083009  932489 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:55:28.083027  932489 cache.go:107] acquiring lock: {Name:mkb59ef52ccfb9558cc0c53f2d458d37076da56a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.083057  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:55:28.083066  932489 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.239µs
	I1229 07:55:28.083072  932489 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:55:28.083078  932489 cache.go:87] Successfully saved all images to host disk.
	I1229 07:55:28.102038  932489 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:55:28.102060  932489 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:55:28.102075  932489 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:55:28.102108  932489 start.go:360] acquireMachinesLock for no-preload-822299: {Name:mk8778cb7dc7747a72362114977db1c0dacca8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.102167  932489 start.go:364] duration metric: took 38.097µs to acquireMachinesLock for "no-preload-822299"
	I1229 07:55:28.102191  932489 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:55:28.102201  932489 fix.go:54] fixHost starting: 
	I1229 07:55:28.102464  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:28.119679  932489 fix.go:112] recreateIfNeeded on no-preload-822299: state=Stopped err=<nil>
	W1229 07:55:28.119707  932489 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:55:28.125046  932489 out.go:252] * Restarting existing docker container for "no-preload-822299" ...
	I1229 07:55:28.125139  932489 cli_runner.go:164] Run: docker start no-preload-822299
	I1229 07:55:28.378367  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:28.401570  932489 kic.go:430] container "no-preload-822299" state is running.
	I1229 07:55:28.401982  932489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:55:28.427152  932489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json ...
	I1229 07:55:28.427374  932489 machine.go:94] provisionDockerMachine start ...
	I1229 07:55:28.427436  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:28.458424  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:28.459006  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:28.459023  932489 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:55:28.459721  932489 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44854->127.0.0.1:33828: read: connection reset by peer
	I1229 07:55:31.620342  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-822299
	
	I1229 07:55:31.620367  932489 ubuntu.go:182] provisioning hostname "no-preload-822299"
	I1229 07:55:31.620441  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:31.638176  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:31.638493  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:31.638512  932489 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-822299 && echo "no-preload-822299" | sudo tee /etc/hostname
	I1229 07:55:31.802797  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-822299
	
	I1229 07:55:31.802885  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:31.820688  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:31.821080  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:31.821107  932489 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-822299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-822299/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-822299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:55:31.973291  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:55:31.973318  932489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:55:31.973343  932489 ubuntu.go:190] setting up certificates
	I1229 07:55:31.973351  932489 provision.go:84] configureAuth start
	I1229 07:55:31.973410  932489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:55:31.990838  932489 provision.go:143] copyHostCerts
	I1229 07:55:31.990906  932489 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:55:31.990927  932489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:55:31.991004  932489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:55:31.991120  932489 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:55:31.991137  932489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:55:31.991165  932489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:55:31.991230  932489 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:55:31.991242  932489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:55:31.991268  932489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:55:31.991329  932489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.no-preload-822299 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-822299]
	I1229 07:55:32.311372  932489 provision.go:177] copyRemoteCerts
	I1229 07:55:32.311449  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:55:32.311493  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:32.328693  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:32.432506  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:55:32.450696  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:55:32.468314  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:55:32.486136  932489 provision.go:87] duration metric: took 512.762807ms to configureAuth
	I1229 07:55:32.486165  932489 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:55:32.486362  932489 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:55:32.486461  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:32.503445  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:32.503758  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:32.503778  932489 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:55:32.864494  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:55:32.864518  932489 machine.go:97] duration metric: took 4.437133718s to provisionDockerMachine
	I1229 07:55:32.864530  932489 start.go:293] postStartSetup for "no-preload-822299" (driver="docker")
	I1229 07:55:32.864567  932489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:55:32.864652  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:55:32.864724  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:32.883345  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:32.990221  932489 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:55:32.994159  932489 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:55:32.994193  932489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:55:32.994206  932489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:55:32.994267  932489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:55:32.994373  932489 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:55:32.994486  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:55:33.003838  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:55:33.024896  932489 start.go:296] duration metric: took 160.348912ms for postStartSetup
	I1229 07:55:33.024996  932489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:55:33.025042  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:33.049184  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:33.157946  932489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:55:33.162793  932489 fix.go:56] duration metric: took 5.060574933s for fixHost
	I1229 07:55:33.162822  932489 start.go:83] releasing machines lock for "no-preload-822299", held for 5.060642305s
	I1229 07:55:33.162899  932489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:55:33.180540  932489 ssh_runner.go:195] Run: cat /version.json
	I1229 07:55:33.180601  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:33.180602  932489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:55:33.180668  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:33.198614  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:33.199831  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:33.407543  932489 ssh_runner.go:195] Run: systemctl --version
	I1229 07:55:33.414366  932489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:55:33.449923  932489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:55:33.454215  932489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:55:33.454329  932489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:55:33.462155  932489 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:55:33.462192  932489 start.go:496] detecting cgroup driver to use...
	I1229 07:55:33.462225  932489 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:55:33.462302  932489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:55:33.477171  932489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:55:33.490444  932489 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:55:33.490540  932489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:55:33.506184  932489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:55:33.519876  932489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:55:33.641795  932489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:55:33.785480  932489 docker.go:234] disabling docker service ...
	I1229 07:55:33.785557  932489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:55:33.803224  932489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:55:33.818096  932489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:55:33.937249  932489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:55:34.062445  932489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:55:34.076130  932489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:55:34.092111  932489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:55:34.092230  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.102217  932489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:55:34.102304  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.111496  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.120471  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.129361  932489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:55:34.137796  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.147212  932489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.155442  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.164141  932489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:55:34.171585  932489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:55:34.178884  932489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:55:34.300313  932489 ssh_runner.go:195] Run: sudo systemctl restart crio
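The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, resets conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before the daemon-reload and crio restart. A rough Go equivalent of just the first two of those edits, assuming the same file path and values as the log; everything else about the sketch is illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// reconfigureCrio mirrors two of the sed edits from the log: pin the pause
// image and force the cgroupfs cgroup manager in the CRI-O drop-in config.
func reconfigureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := reconfigureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

As in the log, a rewrite like this only takes effect after systemd is reloaded and the crio service restarted, which is exactly what the preceding two commands do.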
	I1229 07:55:34.480889  932489 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:55:34.480958  932489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:55:34.485592  932489 start.go:574] Will wait 60s for crictl version
	I1229 07:55:34.485679  932489 ssh_runner.go:195] Run: which crictl
	I1229 07:55:34.490004  932489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:55:34.520933  932489 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:55:34.521052  932489 ssh_runner.go:195] Run: crio --version
	I1229 07:55:34.551541  932489 ssh_runner.go:195] Run: crio --version
	I1229 07:55:34.581779  932489 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:55:34.584508  932489 cli_runner.go:164] Run: docker network inspect no-preload-822299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:55:34.600623  932489 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:55:34.604518  932489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:55:34.614283  932489 kubeadm.go:884] updating cluster {Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:55:34.614402  932489 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:55:34.614443  932489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:55:34.659249  932489 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:55:34.659275  932489 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:55:34.659284  932489 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:55:34.659381  932489 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-822299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:55:34.659464  932489 ssh_runner.go:195] Run: crio config
	I1229 07:55:34.732169  932489 cni.go:84] Creating CNI manager for ""
	I1229 07:55:34.732192  932489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:55:34.732212  932489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:55:34.732234  932489 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-822299 NodeName:no-preload-822299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:55:34.732358  932489 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-822299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
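The generated kubeadm config above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the log copies to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch that splits such a file and reports each apiVersion/kind, which can be handy for sanity-checking the file; it assumes the external gopkg.in/yaml.v3 module and the path from the log, and is purely illustrative:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration
		// and KubeProxyConfiguration, in that order, for this file.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}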
	
	I1229 07:55:34.732428  932489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:55:34.741505  932489 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:55:34.741609  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:55:34.749445  932489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:55:34.762645  932489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:55:34.775749  932489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1229 07:55:34.788277  932489 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:55:34.791810  932489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
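Both host entries (host.minikube.internal earlier, control-plane.minikube.internal here) are maintained with the same bash pattern: drop any existing line for the name, append the fresh "ip<TAB>name" mapping, and copy the result back over /etc/hosts. A hedged Go equivalent of that one-liner, using the IP and hostname from the log; the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry replaces any existing /etc/hosts line for name with a single
// "ip<TAB>name" mapping, mirroring the grep -v / echo / cp sequence in the log.
func pinHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name, "")
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := pinHostsEntry("192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}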
	I1229 07:55:34.801421  932489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:55:34.919342  932489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:55:34.936444  932489 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299 for IP: 192.168.85.2
	I1229 07:55:34.936518  932489 certs.go:195] generating shared ca certs ...
	I1229 07:55:34.936569  932489 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:34.936764  932489 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:55:34.936937  932489 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:55:34.936975  932489 certs.go:257] generating profile certs ...
	I1229 07:55:34.937108  932489 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.key
	I1229 07:55:34.937179  932489 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key.b9b3cced
	I1229 07:55:34.937233  932489 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key
	I1229 07:55:34.937361  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:55:34.937399  932489 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:55:34.937412  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:55:34.937452  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:55:34.937484  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:55:34.937517  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:55:34.937567  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:55:34.938174  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:55:34.958972  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:55:34.979787  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:55:35.003439  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:55:35.029255  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:55:35.059784  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:55:35.086174  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:55:35.107937  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:55:35.127602  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:55:35.155177  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:55:35.179816  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:55:35.208444  932489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:55:35.223967  932489 ssh_runner.go:195] Run: openssl version
	I1229 07:55:35.231200  932489 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.242913  932489 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:55:35.251822  932489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.256510  932489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.256641  932489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.301821  932489 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:55:35.310218  932489 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.317555  932489 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:55:35.324707  932489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.328546  932489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.328615  932489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.369064  932489 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:55:35.376588  932489 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.385258  932489 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:55:35.392693  932489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.396545  932489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.396675  932489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.438009  932489 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
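Each CA bundle above is wired into the system trust store the same way: the PEM is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a /etc/ssl/certs/<hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0 above) is checked. A compact Go sketch of that flow, shelling out to the same openssl invocation the log uses; creating the hash-named link directly here is an illustration, not necessarily how minikube itself sets it up:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert symlinks /etc/ssl/certs/<subject-hash>.0 to certPath so OpenSSL
// (and anything linked against it) picks the certificate up as a trusted CA.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs does
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}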
	I1229 07:55:35.445486  932489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:55:35.449097  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:55:35.490117  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:55:35.535089  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:55:35.588117  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:55:35.645389  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:55:35.729704  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
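The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted rather than re-bootstrapped. The same check can be done in pure Go with crypto/x509, so no openssl binary is required; the path below is one of the certs named in the log and the 24h window matches -checkend 86400 (a sketch, not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least the given duration, mirroring `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}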
	I1229 07:55:35.809860  932489 kubeadm.go:401] StartCluster: {Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:55:35.809957  932489 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:55:35.810055  932489 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:55:35.878184  932489 cri.go:96] found id: "6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989"
	I1229 07:55:35.878207  932489 cri.go:96] found id: "41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8"
	I1229 07:55:35.878212  932489 cri.go:96] found id: "0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d"
	I1229 07:55:35.878216  932489 cri.go:96] found id: "b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59"
	I1229 07:55:35.878219  932489 cri.go:96] found id: ""
	I1229 07:55:35.878290  932489 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:55:35.893188  932489 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:55:35Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:55:35.893301  932489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:55:35.902959  932489 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:55:35.902979  932489 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:55:35.903048  932489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:55:35.913281  932489 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:55:35.913760  932489 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-822299" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:55:35.913909  932489 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-822299" cluster setting kubeconfig missing "no-preload-822299" context setting]
	I1229 07:55:35.914222  932489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:35.915796  932489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:55:35.924616  932489 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:55:35.924649  932489 kubeadm.go:602] duration metric: took 21.664668ms to restartPrimaryControlPlane
	I1229 07:55:35.924679  932489 kubeadm.go:403] duration metric: took 114.830263ms to StartCluster
	I1229 07:55:35.924701  932489 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:35.924774  932489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:55:35.925406  932489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:35.925647  932489 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:55:35.926044  932489 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:55:35.926044  932489 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:55:35.926123  932489 addons.go:70] Setting storage-provisioner=true in profile "no-preload-822299"
	I1229 07:55:35.926147  932489 addons.go:239] Setting addon storage-provisioner=true in "no-preload-822299"
	W1229 07:55:35.926153  932489 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:55:35.926174  932489 addons.go:70] Setting dashboard=true in profile "no-preload-822299"
	I1229 07:55:35.926188  932489 addons.go:239] Setting addon dashboard=true in "no-preload-822299"
	I1229 07:55:35.926182  932489 addons.go:70] Setting default-storageclass=true in profile "no-preload-822299"
	W1229 07:55:35.926194  932489 addons.go:248] addon dashboard should already be in state true
	I1229 07:55:35.926202  932489 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-822299"
	I1229 07:55:35.926214  932489 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:55:35.926496  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.926670  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.926176  932489 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:55:35.928334  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.932931  932489 out.go:179] * Verifying Kubernetes components...
	I1229 07:55:35.936162  932489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:55:35.984877  932489 addons.go:239] Setting addon default-storageclass=true in "no-preload-822299"
	W1229 07:55:35.984899  932489 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:55:35.984922  932489 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:55:35.985359  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.994423  932489 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:55:35.994550  932489 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:55:35.998407  932489 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:55:35.998515  932489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:55:35.998526  932489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:55:35.998592  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:36.001316  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:55:36.001349  932489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:55:36.001433  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:36.028998  932489 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:55:36.029022  932489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:55:36.029086  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:36.054442  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:36.069186  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:36.078891  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:36.271345  932489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:55:36.296440  932489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:55:36.297048  932489 node_ready.go:35] waiting up to 6m0s for node "no-preload-822299" to be "Ready" ...
	I1229 07:55:36.338245  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:55:36.338312  932489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:55:36.366491  932489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:55:36.385061  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:55:36.385130  932489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:55:36.437192  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:55:36.437262  932489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:55:36.498075  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:55:36.498142  932489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:55:36.534266  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:55:36.534345  932489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:55:36.561218  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:55:36.561293  932489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:55:36.589935  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:55:36.590011  932489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:55:36.617829  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:55:36.617902  932489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:55:36.638864  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:55:36.638941  932489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:55:36.661692  932489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:55:39.152777  932489 node_ready.go:49] node "no-preload-822299" is "Ready"
	I1229 07:55:39.152849  932489 node_ready.go:38] duration metric: took 2.85577505s for node "no-preload-822299" to be "Ready" ...
	I1229 07:55:39.152864  932489 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:55:39.152924  932489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:55:39.336497  932489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.039970658s)
	I1229 07:55:40.322844  932489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.956276701s)
	I1229 07:55:40.322983  932489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.661189936s)
	I1229 07:55:40.323074  932489 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.170134431s)
	I1229 07:55:40.323132  932489 api_server.go:72] duration metric: took 4.397445054s to wait for apiserver process to appear ...
	I1229 07:55:40.323144  932489 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:55:40.323161  932489 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:55:40.326123  932489 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-822299 addons enable metrics-server
	
	I1229 07:55:40.329091  932489 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1229 07:55:40.332020  932489 addons.go:530] duration metric: took 4.405978969s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1229 07:55:40.338690  932489 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:55:40.338715  932489 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:55:40.823279  932489 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:55:40.831438  932489 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:55:40.832620  932489 api_server.go:141] control plane version: v1.35.0
	I1229 07:55:40.832647  932489 api_server.go:131] duration metric: took 509.496144ms to wait for apiserver health ...
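The first /healthz probe above returns 500 because the rbac/bootstrap-roles post-start hook has not completed; about half a second later the same endpoint answers 200 and the health wait finishes. A minimal Go poller for the same endpoint, using the node IP and port from the log; it skips TLS verification only because this is a sketch rather than a kubeconfig-aware client that trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}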
	I1229 07:55:40.832658  932489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:55:40.835928  932489 system_pods.go:59] 8 kube-system pods found
	I1229 07:55:40.835969  932489 system_pods.go:61] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:40.835979  932489 system_pods.go:61] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:55:40.836015  932489 system_pods.go:61] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:55:40.836030  932489 system_pods.go:61] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:40.836038  932489 system_pods.go:61] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:55:40.836049  932489 system_pods.go:61] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:55:40.836057  932489 system_pods.go:61] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:55:40.836090  932489 system_pods.go:61] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:40.836103  932489 system_pods.go:74] duration metric: took 3.438883ms to wait for pod list to return data ...
	I1229 07:55:40.836124  932489 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:55:40.838609  932489 default_sa.go:45] found service account: "default"
	I1229 07:55:40.838634  932489 default_sa.go:55] duration metric: took 2.498724ms for default service account to be created ...
	I1229 07:55:40.838643  932489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:55:40.841608  932489 system_pods.go:86] 8 kube-system pods found
	I1229 07:55:40.841643  932489 system_pods.go:89] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:40.841661  932489 system_pods.go:89] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:55:40.841674  932489 system_pods.go:89] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:55:40.841685  932489 system_pods.go:89] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:40.841697  932489 system_pods.go:89] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:55:40.841711  932489 system_pods.go:89] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:55:40.841719  932489 system_pods.go:89] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:55:40.841727  932489 system_pods.go:89] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:40.841740  932489 system_pods.go:126] duration metric: took 3.091164ms to wait for k8s-apps to be running ...
	I1229 07:55:40.841749  932489 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:55:40.841808  932489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:55:40.856400  932489 system_svc.go:56] duration metric: took 14.641295ms WaitForService to wait for kubelet
	I1229 07:55:40.856475  932489 kubeadm.go:587] duration metric: took 4.930785279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:55:40.856508  932489 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:55:40.859706  932489 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:55:40.859784  932489 node_conditions.go:123] node cpu capacity is 2
	I1229 07:55:40.859811  932489 node_conditions.go:105] duration metric: took 3.279579ms to run NodePressure ...
	I1229 07:55:40.859839  932489 start.go:242] waiting for startup goroutines ...
	I1229 07:55:40.859883  932489 start.go:247] waiting for cluster config update ...
	I1229 07:55:40.859907  932489 start.go:256] writing updated cluster config ...
	I1229 07:55:40.860229  932489 ssh_runner.go:195] Run: rm -f paused
	I1229 07:55:40.864137  932489 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:55:40.868557  932489 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9pqjr" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:55:42.880390  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:45.374281  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:47.374900  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:49.883830  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:52.374455  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:54.875300  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:57.373808  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:59.374438  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:01.374583  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:03.874010  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:06.374271  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:08.873819  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:10.873858  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:12.874597  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	I1229 07:56:13.374904  932489 pod_ready.go:94] pod "coredns-7d764666f9-9pqjr" is "Ready"
	I1229 07:56:13.374934  932489 pod_ready.go:86] duration metric: took 32.506304502s for pod "coredns-7d764666f9-9pqjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.377855  932489 pod_ready.go:83] waiting for pod "etcd-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.382546  932489 pod_ready.go:94] pod "etcd-no-preload-822299" is "Ready"
	I1229 07:56:13.382574  932489 pod_ready.go:86] duration metric: took 4.691872ms for pod "etcd-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.385125  932489 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.390218  932489 pod_ready.go:94] pod "kube-apiserver-no-preload-822299" is "Ready"
	I1229 07:56:13.390257  932489 pod_ready.go:86] duration metric: took 5.10918ms for pod "kube-apiserver-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.392573  932489 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.572977  932489 pod_ready.go:94] pod "kube-controller-manager-no-preload-822299" is "Ready"
	I1229 07:56:13.573005  932489 pod_ready.go:86] duration metric: took 180.399648ms for pod "kube-controller-manager-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.772996  932489 pod_ready.go:83] waiting for pod "kube-proxy-j7ktq" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.174113  932489 pod_ready.go:94] pod "kube-proxy-j7ktq" is "Ready"
	I1229 07:56:14.174185  932489 pod_ready.go:86] duration metric: took 401.158316ms for pod "kube-proxy-j7ktq" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.373777  932489 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.772843  932489 pod_ready.go:94] pod "kube-scheduler-no-preload-822299" is "Ready"
	I1229 07:56:14.772873  932489 pod_ready.go:86] duration metric: took 399.068129ms for pod "kube-scheduler-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.772886  932489 pod_ready.go:40] duration metric: took 33.908671326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
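The readiness loop above waits up to 4 minutes for one pod per control-plane label in kube-system; coredns takes roughly 32.5s here and the others report Ready almost immediately afterwards. A rough equivalent of that wait can be expressed with kubectl directly; the sketch below shells out to kubectl with the same label selectors and namespace as the log (it differs slightly from minikube's "Ready or be gone" semantics, so treat it as an approximation):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// One selector per component the log waits on in the kube-system namespace.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=4m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "pods with %s not Ready: %v\n", sel, err)
			os.Exit(1)
		}
	}
	fmt.Println("all control-plane pods Ready")
}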
	I1229 07:56:14.831165  932489 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:56:14.834226  932489 out.go:203] 
	W1229 07:56:14.837062  932489 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:56:14.840042  932489 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:56:14.842947  932489 out.go:179] * Done! kubectl is now configured to use "no-preload-822299" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 07:56:11 no-preload-822299 crio[664]: time="2025-12-29T07:56:11.293996986Z" level=info msg="Created container 1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f: kube-system/storage-provisioner/storage-provisioner" id=e3d31caf-ad04-43e6-8e45-a57c6d1f8717 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:56:11 no-preload-822299 crio[664]: time="2025-12-29T07:56:11.295077322Z" level=info msg="Starting container: 1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f" id=bf51766b-7f4e-48f6-af24-af2d9582f057 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:56:11 no-preload-822299 crio[664]: time="2025-12-29T07:56:11.298759456Z" level=info msg="Started container" PID=1684 containerID=1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f description=kube-system/storage-provisioner/storage-provisioner id=bf51766b-7f4e-48f6-af24-af2d9582f057 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a4c1545ee32dce5bcab90a458c70db246beaf244e92cebd95bdcc4097234cb4
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.776811113Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.776849012Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.782644564Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.78268033Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.787181383Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.787216435Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.787542788Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.793057337Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.793093415Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.086719696Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c8202fb2-fd56-4ffc-942b-9fd455bbe1dc name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.08798144Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=32e97173-acfa-4541-a7d2-ab95a5c1cc2b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.089216911Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper" id=3b6882bb-a73a-455e-a38c-672148bf4b0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.089348736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.096612072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.097181028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.115099075Z" level=info msg="Created container 242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper" id=3b6882bb-a73a-455e-a38c-672148bf4b0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.115910986Z" level=info msg="Starting container: 242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9" id=f9b76773-5219-4d65-8524-1c3c688ae49a name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.118459Z" level=info msg="Started container" PID=1757 containerID=242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper id=f9b76773-5219-4d65-8524-1c3c688ae49a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7a5e83c6ddb881c8750a90c1082f639154a69abd0b7828a5f49b5edebc941ec
	Dec 29 07:56:23 no-preload-822299 conmon[1755]: conmon 242d9128050d7dc895a3 <ninfo>: container 1757 exited with status 1
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.291513376Z" level=info msg="Removing container: be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f" id=1213aabe-f73a-4bfb-87e2-67db4220b935 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.29946644Z" level=info msg="Error loading conmon cgroup of container be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f: cgroup deleted" id=1213aabe-f73a-4bfb-87e2-67db4220b935 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.302717293Z" level=info msg="Removed container be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper" id=1213aabe-f73a-4bfb-87e2-67db4220b935 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	242d9128050d7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   b7a5e83c6ddb8       dashboard-metrics-scraper-867fb5f87b-msgsf   kubernetes-dashboard
	1ac564eb3b678       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           18 seconds ago      Running             storage-provisioner         2                   2a4c1545ee32d       storage-provisioner                          kube-system
	8b4305f000816       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   edb7d744b8aa5       kubernetes-dashboard-b84665fb8-w5gsg         kubernetes-dashboard
	58e48a40d8762       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   5e92efaee01f4       busybox                                      default
	207de0eff4ea6       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           49 seconds ago      Running             coredns                     1                   aaf81f5bc1730       coredns-7d764666f9-9pqjr                     kube-system
	027c0db3fe162       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           49 seconds ago      Exited              storage-provisioner         1                   2a4c1545ee32d       storage-provisioner                          kube-system
	8c7aaa2e263d6       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           49 seconds ago      Running             kube-proxy                  1                   1856b9a3521f5       kube-proxy-j7ktq                             kube-system
	4b778416881b6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           49 seconds ago      Running             kindnet-cni                 1                   60564c4e90ddf       kindnet-mtpz6                                kube-system
	6032c154ef816       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           54 seconds ago      Running             etcd                        1                   0f3129b3fb9d2       etcd-no-preload-822299                       kube-system
	41a7706d81e9c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           54 seconds ago      Running             kube-controller-manager     1                   3b315b6cd9c17       kube-controller-manager-no-preload-822299    kube-system
	0089246ff4de9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           54 seconds ago      Running             kube-apiserver              1                   bdd95c369f8f3       kube-apiserver-no-preload-822299             kube-system
	b07cb6606d4da       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           54 seconds ago      Running             kube-scheduler              1                   e018df3acbbfd       kube-scheduler-no-preload-822299             kube-system
	
	
	==> coredns [207de0eff4ea65084cae90279e02da02d1295619882838059e4fbfcf26208f18] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50471 - 58572 "HINFO IN 6307879131506065131.6569118893440573743. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011908091s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-822299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-822299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=no-preload-822299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_54_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:54:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-822299
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:56:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-822299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                19ace3de-03e9-4b9a-9295-4970de68f6fd
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-7d764666f9-9pqjr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-no-preload-822299                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         110s
	  kube-system                 kindnet-mtpz6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-822299              250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-822299     200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-j7ktq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-822299              100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-msgsf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-w5gsg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  105s  node-controller  Node no-preload-822299 event: Registered Node no-preload-822299 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node no-preload-822299 event: Registered Node no-preload-822299 in Controller
	
	
	==> dmesg <==
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989] <==
	{"level":"info","ts":"2025-12-29T07:55:35.879619Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:55:35.879738Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:55:35.888651Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:55:35.893382Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:55:35.893554Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:55:35.896921Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:55:35.896950Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:55:36.720832Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:55:36.722903Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:55:36.723000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:55:36.723059Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:55:36.723100Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.724830Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.724901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:55:36.724943Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.724978Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.728147Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-822299 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:55:36.728221Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:55:36.728285Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:55:36.729359Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:55:36.731161Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:55:36.735562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:55:36.740924Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:55:36.736798Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:55:36.741796Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:56:30 up  4:39,  0 user,  load average: 1.32, 1.78, 2.23
	Linux no-preload-822299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4b778416881b60e07179255537ddb5371244f603b8bc5d645105c9f71226af14] <==
	I1229 07:55:40.564656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:55:40.655819       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:55:40.656015       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:55:40.656062       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:55:40.656116       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:55:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:55:40.764723       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:55:40.764752       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:55:40.764769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:55:40.765932       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:56:10.764928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:56:10.766057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 07:56:10.766113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:56:10.766196       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 07:56:11.965933       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:56:11.965965       1 metrics.go:72] Registering metrics
	I1229 07:56:11.966040       1 controller.go:711] "Syncing nftables rules"
	I1229 07:56:20.770707       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:56:20.770746       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d] <==
	I1229 07:55:39.361759       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:55:39.361744       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:39.362287       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:55:39.365433       1 policy_source.go:248] refreshing policies
	I1229 07:55:39.362367       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1229 07:55:39.361769       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:39.380016       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:55:39.386131       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:55:39.386407       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:55:39.386739       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:55:39.386796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:55:39.386828       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:55:39.398348       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:55:39.411110       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:55:39.783516       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:55:39.821899       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:55:39.849259       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:55:39.865324       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:55:39.866926       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:55:39.883729       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:55:39.962020       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.228.189"}
	I1229 07:55:39.989407       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.142.107"}
	I1229 07:55:42.671226       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:55:42.871049       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:55:42.921292       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8] <==
	I1229 07:55:42.294735       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.294758       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:55:42.294791       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:55:42.294801       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:42.294805       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.294889       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:42.294927       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295072       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295079       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295137       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295112       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.296480       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295121       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.298945       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.294917       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295128       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.299609       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.349257       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.395390       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.397589       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.397704       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:55:42.397736       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:55:42.875160       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1229 07:55:42.881175       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1229 07:55:42.881265       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [8c7aaa2e263d66f365a99d3972aed68fc1fca62757f4eb3b373fe85d0cbf3f11] <==
	I1229 07:55:40.651411       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:55:40.750377       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:40.850989       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:40.851074       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:55:40.851167       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:55:40.887486       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:55:40.887541       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:55:40.891024       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:55:40.891316       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:55:40.891342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:55:40.895219       1 config.go:200] "Starting service config controller"
	I1229 07:55:40.895735       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:55:40.896238       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:55:40.896254       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:55:40.896268       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:55:40.896272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:55:40.900030       1 config.go:309] "Starting node config controller"
	I1229 07:55:40.900125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:55:40.900157       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:55:40.996928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:55:40.996961       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:55:40.996929       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59] <==
	I1229 07:55:37.524808       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:55:39.005518       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:55:39.005556       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:55:39.005567       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:55:39.005575       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:55:39.243260       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:55:39.243289       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:55:39.254080       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:55:39.254109       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:39.259055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:55:39.259106       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:55:39.455158       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:55:50 no-preload-822299 kubelet[785]: E1229 07:55:50.194006     785 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-822299" containerName="kube-apiserver"
	Dec 29 07:55:52 no-preload-822299 kubelet[785]: E1229 07:55:52.199528     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-w5gsg" containerName="kubernetes-dashboard"
	Dec 29 07:55:52 no-preload-822299 kubelet[785]: I1229 07:55:52.214559     785 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-w5gsg" podStartSLOduration=1.775375251 podStartE2EDuration="10.214545031s" podCreationTimestamp="2025-12-29 07:55:42 +0000 UTC" firstStartedPulling="2025-12-29 07:55:43.165588578 +0000 UTC m=+8.226622191" lastFinishedPulling="2025-12-29 07:55:51.604758367 +0000 UTC m=+16.665791971" observedRunningTime="2025-12-29 07:55:52.213379559 +0000 UTC m=+17.274413164" watchObservedRunningTime="2025-12-29 07:55:52.214545031 +0000 UTC m=+17.275578636"
	Dec 29 07:55:53 no-preload-822299 kubelet[785]: E1229 07:55:53.201360     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-w5gsg" containerName="kubernetes-dashboard"
	Dec 29 07:55:54 no-preload-822299 kubelet[785]: E1229 07:55:54.294017     785 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-822299" containerName="kube-controller-manager"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: E1229 07:55:58.085494     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: I1229 07:55:58.085548     785 scope.go:122] "RemoveContainer" containerID="9433232180bc4a8bf1784a4e6bded6091abc98b786b7f8cc8df05f2ee732dd98"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: I1229 07:55:58.215163     785 scope.go:122] "RemoveContainer" containerID="9433232180bc4a8bf1784a4e6bded6091abc98b786b7f8cc8df05f2ee732dd98"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: E1229 07:55:58.215701     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: I1229 07:55:58.215811     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: E1229 07:55:58.216101     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-msgsf_kubernetes-dashboard(fe6e803c-6769-42f9-a71e-999b6cfd8085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" podUID="fe6e803c-6769-42f9-a71e-999b6cfd8085"
	Dec 29 07:55:59 no-preload-822299 kubelet[785]: E1229 07:55:59.806781     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:55:59 no-preload-822299 kubelet[785]: I1229 07:55:59.807335     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:55:59 no-preload-822299 kubelet[785]: E1229 07:55:59.807586     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-msgsf_kubernetes-dashboard(fe6e803c-6769-42f9-a71e-999b6cfd8085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" podUID="fe6e803c-6769-42f9-a71e-999b6cfd8085"
	Dec 29 07:56:11 no-preload-822299 kubelet[785]: I1229 07:56:11.257642     785 scope.go:122] "RemoveContainer" containerID="027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed"
	Dec 29 07:56:13 no-preload-822299 kubelet[785]: E1229 07:56:13.154778     785 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9pqjr" containerName="coredns"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: E1229 07:56:23.086243     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: I1229 07:56:23.086284     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: I1229 07:56:23.289248     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: E1229 07:56:23.289690     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: I1229 07:56:23.290532     785 scope.go:122] "RemoveContainer" containerID="242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: E1229 07:56:23.290827     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-msgsf_kubernetes-dashboard(fe6e803c-6769-42f9-a71e-999b6cfd8085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" podUID="fe6e803c-6769-42f9-a71e-999b6cfd8085"
	Dec 29 07:56:27 no-preload-822299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:56:27 no-preload-822299 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:56:27 no-preload-822299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8b4305f000816e5647e84ad1081d4c1e587f0138a84b81bb87dd316602494d5c] <==
	2025/12/29 07:55:51 Starting overwatch
	2025/12/29 07:55:51 Using namespace: kubernetes-dashboard
	2025/12/29 07:55:51 Using in-cluster config to connect to apiserver
	2025/12/29 07:55:51 Using secret token for csrf signing
	2025/12/29 07:55:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:55:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:55:51 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:55:51 Generating JWE encryption key
	2025/12/29 07:55:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:55:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:55:51 Initializing JWE encryption key from synchronized object
	2025/12/29 07:55:51 Creating in-cluster Sidecar client
	2025/12/29 07:55:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:55:51 Serving insecurely on HTTP port: 9090
	2025/12/29 07:56:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed] <==
	I1229 07:55:40.640185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:56:10.642572       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f] <==
	I1229 07:56:11.309475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:56:11.322132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:56:11.322200       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:56:11.324413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:14.780772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:19.041730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:22.640513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:25.694472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:28.717040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:28.724911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:56:28.725102       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:56:28.725483       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40fb3eac-348b-44c6-ae27-7f5b53dd11a0", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-822299_1f3396cc-24ef-4c60-adfd-abd09ea7a20d became leader
	I1229 07:56:28.725566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-822299_1f3396cc-24ef-4c60-adfd-abd09ea7a20d!
	W1229 07:56:28.728548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:28.739418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:56:28.826495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-822299_1f3396cc-24ef-4c60-adfd-abd09ea7a20d!
	W1229 07:56:30.745383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:30.752383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-822299 -n no-preload-822299
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-822299 -n no-preload-822299: exit status 2 (382.323607ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-822299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-822299
helpers_test.go:244: (dbg) docker inspect no-preload-822299:

-- stdout --
	[
	    {
	        "Id": "199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70",
	        "Created": "2025-12-29T07:54:12.225052256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 932616,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:55:28.157173327Z",
	            "FinishedAt": "2025-12-29T07:55:27.357359899Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/hostname",
	        "HostsPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/hosts",
	        "LogPath": "/var/lib/docker/containers/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70/199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70-json.log",
	        "Name": "/no-preload-822299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-822299:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-822299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "199b371fef132efde10af77a3b0cf861bf8ef1538d43c2334de40b18ae5def70",
	                "LowerDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9c14dc7b3402416cae82c68b5d58f6bb280c659b388cc078b450f2d2e938cb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-822299",
	                "Source": "/var/lib/docker/volumes/no-preload-822299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-822299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-822299",
	                "name.minikube.sigs.k8s.io": "no-preload-822299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6420e632940d16d15f91e4e24f596a1c7c12da8e37d3a8b47a52e6be3b02b051",
	            "SandboxKey": "/var/run/docker/netns/6420e632940d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-822299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:60:2a:94:cb:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9fa6b3c21ae85fa8b96c74beecaa8ac1deb81e91b1a2aaf65c5b20de7bb77be6",
	                    "EndpointID": "8bd798ff9ffc62daf17c43e864da6011cce17ea5daacc261af1869bd7af75e76",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-822299",
	                        "199b371fef13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
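The inspect dump above is the raw JSON that the harness templates against with `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to find the forwarded SSH port. For reference, a minimal Go sketch (illustrative only, not minikube code; it assumes the full `docker container inspect no-preload-822299` output is piped in on stdin) that extracts the same value:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // portBinding mirrors the HostIp/HostPort objects under NetworkSettings.Ports.
    type portBinding struct {
    	HostIp   string
    	HostPort string
    }

    // container is a minimal view of one element of the `docker container inspect` array.
    type container struct {
    	NetworkSettings struct {
    		Ports map[string][]portBinding
    	}
    }

    func main() {
    	var containers []container
    	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
    		fmt.Fprintln(os.Stderr, "decode:", err)
    		os.Exit(1)
    	}
    	if len(containers) == 0 {
    		fmt.Fprintln(os.Stderr, "no container in inspect output")
    		os.Exit(1)
    	}
    	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
    	if len(bindings) == 0 {
    		fmt.Fprintln(os.Stderr, "no 22/tcp binding found")
    		os.Exit(1)
    	}
    	// For the dump above this prints the forwarded SSH port.
    	fmt.Println(bindings[0].HostPort)
    }

Feeding it the inspect output above should print 33828, the HostPort the SSH provisioner later dials (see the sshutil lines in the logs below).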
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299: exit status 2 (353.345487ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
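Here the templated status command exits 2 yet still prints Running for the host, which the harness records as "may be ok" before moving on to post-mortem log collection. A rough Go sketch of that pattern (a hypothetical standalone helper, not the actual helpers_test.go implementation), capturing both the templated output and the exit code with os/exec:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same invocation the harness runs; adjust the binary path and profile as needed.
    	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", "no-preload-822299")
    	out, err := cmd.Output()
    	host := strings.TrimSpace(string(out))

    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Printf("host=%s (exit 0)\n", host)
    	case errors.As(err, &exitErr):
    		// A non-zero exit still carries stdout; exit status 2 with host=Running
    		// is the "may be ok" case logged above.
    		fmt.Printf("host=%s (exit %d, may be ok)\n", host, exitErr.ExitCode())
    	default:
    		fmt.Println("failed to run minikube:", err)
    	}
    }

errors.As distinguishes a non-zero exit (which still carries the templated stdout) from a failure to launch the binary at all.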
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-822299 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-822299 logs -n 25: (1.312564419s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:45 UTC │ 29 Dec 25 07:46 UTC │
	│ start   │ -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ delete  │ -p cert-expiration-853545                                                                                                                                                                                                                     │ cert-expiration-853545    │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │ 29 Dec 25 07:49 UTC │
	│ start   │ -p force-systemd-flag-122604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-122604 │ jenkins │ v1.37.0 │ 29 Dec 25 07:49 UTC │                     │
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562  │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463       │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867    │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299         │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:55:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:55:27.885161  932489 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:55:27.885288  932489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:55:27.885299  932489 out.go:374] Setting ErrFile to fd 2...
	I1229 07:55:27.885305  932489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:55:27.885657  932489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:55:27.886107  932489 out.go:368] Setting JSON to false
	I1229 07:55:27.887263  932489 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16680,"bootTime":1766978248,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:55:27.887351  932489 start.go:143] virtualization:  
	I1229 07:55:27.890790  932489 out.go:179] * [no-preload-822299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:55:27.894610  932489 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:55:27.894709  932489 notify.go:221] Checking for updates...
	I1229 07:55:27.900561  932489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:55:27.903485  932489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:55:27.906285  932489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:55:27.909027  932489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:55:27.911861  932489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:55:27.915098  932489 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:55:27.915699  932489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:55:27.942032  932489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:55:27.942154  932489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:55:28.006165  932489 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:55:27.994615732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:55:28.006312  932489 docker.go:319] overlay module found
	I1229 07:55:28.011541  932489 out.go:179] * Using the docker driver based on existing profile
	I1229 07:55:28.014545  932489 start.go:309] selected driver: docker
	I1229 07:55:28.014574  932489 start.go:928] validating driver "docker" against &{Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:55:28.014689  932489 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:55:28.015433  932489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:55:28.069888  932489 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:55:28.060826981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:55:28.070216  932489 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:55:28.070256  932489 cni.go:84] Creating CNI manager for ""
	I1229 07:55:28.070327  932489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:55:28.070371  932489 start.go:353] cluster config:
	{Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:55:28.073618  932489 out.go:179] * Starting "no-preload-822299" primary control-plane node in "no-preload-822299" cluster
	I1229 07:55:28.076397  932489 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:55:28.079248  932489 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:55:28.082111  932489 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:55:28.082188  932489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:55:28.082260  932489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json ...
	I1229 07:55:28.082530  932489 cache.go:107] acquiring lock: {Name:mk77910156d27401bb7909a31b93ec051f9aa764 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082617  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1229 07:55:28.082629  932489 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.293µs
	I1229 07:55:28.082643  932489 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1229 07:55:28.082656  932489 cache.go:107] acquiring lock: {Name:mk7acdd26b6f5f36232f222f1b8b250f3e55a098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082690  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1229 07:55:28.082700  932489 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 46.474µs
	I1229 07:55:28.082707  932489 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1229 07:55:28.082722  932489 cache.go:107] acquiring lock: {Name:mk6c222e06ab2a6eba07c71c6bbf60553a466256 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082754  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1229 07:55:28.082764  932489 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 43.315µs
	I1229 07:55:28.082770  932489 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1229 07:55:28.082780  932489 cache.go:107] acquiring lock: {Name:mkc4b5fee2fc8ad3886c6f422b410c403d3cd271 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082811  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1229 07:55:28.082819  932489 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 40.526µs
	I1229 07:55:28.082825  932489 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1229 07:55:28.082843  932489 cache.go:107] acquiring lock: {Name:mkfe408e55325d204df091c798ec530361ec90e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082873  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1229 07:55:28.082882  932489 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 48.337µs
	I1229 07:55:28.082888  932489 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1229 07:55:28.082897  932489 cache.go:107] acquiring lock: {Name:mk7e5915b6ec72206d698779b421c83aee716ab1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082926  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1229 07:55:28.082936  932489 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.877µs
	I1229 07:55:28.082942  932489 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1229 07:55:28.082958  932489 cache.go:107] acquiring lock: {Name:mk42d4be1fc1ab08338ed03f966f32cf4e137a60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.082994  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1229 07:55:28.083003  932489 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 47.557µs
	I1229 07:55:28.083009  932489 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1229 07:55:28.083027  932489 cache.go:107] acquiring lock: {Name:mkb59ef52ccfb9558cc0c53f2d458d37076da56a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.083057  932489 cache.go:115] /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1229 07:55:28.083066  932489 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.239µs
	I1229 07:55:28.083072  932489 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1229 07:55:28.083078  932489 cache.go:87] Successfully saved all images to host disk.
	I1229 07:55:28.102038  932489 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:55:28.102060  932489 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:55:28.102075  932489 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:55:28.102108  932489 start.go:360] acquireMachinesLock for no-preload-822299: {Name:mk8778cb7dc7747a72362114977db1c0dacca8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:55:28.102167  932489 start.go:364] duration metric: took 38.097µs to acquireMachinesLock for "no-preload-822299"
	I1229 07:55:28.102191  932489 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:55:28.102201  932489 fix.go:54] fixHost starting: 
	I1229 07:55:28.102464  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:28.119679  932489 fix.go:112] recreateIfNeeded on no-preload-822299: state=Stopped err=<nil>
	W1229 07:55:28.119707  932489 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:55:28.125046  932489 out.go:252] * Restarting existing docker container for "no-preload-822299" ...
	I1229 07:55:28.125139  932489 cli_runner.go:164] Run: docker start no-preload-822299
	I1229 07:55:28.378367  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:28.401570  932489 kic.go:430] container "no-preload-822299" state is running.
	I1229 07:55:28.401982  932489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:55:28.427152  932489 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/config.json ...
	I1229 07:55:28.427374  932489 machine.go:94] provisionDockerMachine start ...
	I1229 07:55:28.427436  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:28.458424  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:28.459006  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:28.459023  932489 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:55:28.459721  932489 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44854->127.0.0.1:33828: read: connection reset by peer
	I1229 07:55:31.620342  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-822299
	
	I1229 07:55:31.620367  932489 ubuntu.go:182] provisioning hostname "no-preload-822299"
	I1229 07:55:31.620441  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:31.638176  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:31.638493  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:31.638512  932489 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-822299 && echo "no-preload-822299" | sudo tee /etc/hostname
	I1229 07:55:31.802797  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-822299
	
	I1229 07:55:31.802885  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:31.820688  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:31.821080  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:31.821107  932489 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-822299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-822299/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-822299' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:55:31.973291  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:55:31.973318  932489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:55:31.973343  932489 ubuntu.go:190] setting up certificates
	I1229 07:55:31.973351  932489 provision.go:84] configureAuth start
	I1229 07:55:31.973410  932489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:55:31.990838  932489 provision.go:143] copyHostCerts
	I1229 07:55:31.990906  932489 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:55:31.990927  932489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:55:31.991004  932489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:55:31.991120  932489 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:55:31.991137  932489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:55:31.991165  932489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:55:31.991230  932489 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:55:31.991242  932489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:55:31.991268  932489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:55:31.991329  932489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.no-preload-822299 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-822299]
	I1229 07:55:32.311372  932489 provision.go:177] copyRemoteCerts
	I1229 07:55:32.311449  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:55:32.311493  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:32.328693  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:32.432506  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:55:32.450696  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:55:32.468314  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:55:32.486136  932489 provision.go:87] duration metric: took 512.762807ms to configureAuth
	I1229 07:55:32.486165  932489 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:55:32.486362  932489 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:55:32.486461  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:32.503445  932489 main.go:144] libmachine: Using SSH client type: native
	I1229 07:55:32.503758  932489 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33828 <nil> <nil>}
	I1229 07:55:32.503778  932489 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:55:32.864494  932489 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:55:32.864518  932489 machine.go:97] duration metric: took 4.437133718s to provisionDockerMachine
	I1229 07:55:32.864530  932489 start.go:293] postStartSetup for "no-preload-822299" (driver="docker")
	I1229 07:55:32.864567  932489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:55:32.864652  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:55:32.864724  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:32.883345  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:32.990221  932489 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:55:32.994159  932489 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:55:32.994193  932489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:55:32.994206  932489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:55:32.994267  932489 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:55:32.994373  932489 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:55:32.994486  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:55:33.003838  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:55:33.024896  932489 start.go:296] duration metric: took 160.348912ms for postStartSetup
	I1229 07:55:33.024996  932489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:55:33.025042  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:33.049184  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:33.157946  932489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:55:33.162793  932489 fix.go:56] duration metric: took 5.060574933s for fixHost
	I1229 07:55:33.162822  932489 start.go:83] releasing machines lock for "no-preload-822299", held for 5.060642305s
	I1229 07:55:33.162899  932489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-822299
	I1229 07:55:33.180540  932489 ssh_runner.go:195] Run: cat /version.json
	I1229 07:55:33.180601  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:33.180602  932489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:55:33.180668  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:33.198614  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:33.199831  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:33.407543  932489 ssh_runner.go:195] Run: systemctl --version
	I1229 07:55:33.414366  932489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:55:33.449923  932489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:55:33.454215  932489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:55:33.454329  932489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:55:33.462155  932489 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:55:33.462192  932489 start.go:496] detecting cgroup driver to use...
	I1229 07:55:33.462225  932489 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:55:33.462302  932489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:55:33.477171  932489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:55:33.490444  932489 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:55:33.490540  932489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:55:33.506184  932489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:55:33.519876  932489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:55:33.641795  932489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:55:33.785480  932489 docker.go:234] disabling docker service ...
	I1229 07:55:33.785557  932489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:55:33.803224  932489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:55:33.818096  932489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:55:33.937249  932489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:55:34.062445  932489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:55:34.076130  932489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:55:34.092111  932489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:55:34.092230  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.102217  932489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:55:34.102304  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.111496  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.120471  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.129361  932489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:55:34.137796  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.147212  932489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.155442  932489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:55:34.164141  932489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:55:34.171585  932489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:55:34.178884  932489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:55:34.300313  932489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:55:34.480889  932489 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:55:34.480958  932489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:55:34.485592  932489 start.go:574] Will wait 60s for crictl version
	I1229 07:55:34.485679  932489 ssh_runner.go:195] Run: which crictl
	I1229 07:55:34.490004  932489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:55:34.520933  932489 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:55:34.521052  932489 ssh_runner.go:195] Run: crio --version
	I1229 07:55:34.551541  932489 ssh_runner.go:195] Run: crio --version
	I1229 07:55:34.581779  932489 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:55:34.584508  932489 cli_runner.go:164] Run: docker network inspect no-preload-822299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:55:34.600623  932489 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:55:34.604518  932489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:55:34.614283  932489 kubeadm.go:884] updating cluster {Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:55:34.614402  932489 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:55:34.614443  932489 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:55:34.659249  932489 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:55:34.659275  932489 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:55:34.659284  932489 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:55:34.659381  932489 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-822299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:55:34.659464  932489 ssh_runner.go:195] Run: crio config
	I1229 07:55:34.732169  932489 cni.go:84] Creating CNI manager for ""
	I1229 07:55:34.732192  932489 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:55:34.732212  932489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:55:34.732234  932489 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-822299 NodeName:no-preload-822299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:55:34.732358  932489 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-822299"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:55:34.732428  932489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:55:34.741505  932489 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:55:34.741609  932489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:55:34.749445  932489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:55:34.762645  932489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:55:34.775749  932489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
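
For reference, the kubeadm.yaml.new written above is a single multi-document YAML file containing the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration dumped earlier in the log. A minimal Go sketch for splitting and inspecting such a file, assuming gopkg.in/yaml.v3 is available (any multi-document YAML decoder would do); the path is the one shown in the log line above:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumption: yaml.v3 is on the module path
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each document carries its own apiVersion/kind (InitConfiguration,
		// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}
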
	I1229 07:55:34.788277  932489 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:55:34.791810  932489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:55:34.801421  932489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:55:34.919342  932489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:55:34.936444  932489 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299 for IP: 192.168.85.2
	I1229 07:55:34.936518  932489 certs.go:195] generating shared ca certs ...
	I1229 07:55:34.936569  932489 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:34.936764  932489 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:55:34.936937  932489 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:55:34.936975  932489 certs.go:257] generating profile certs ...
	I1229 07:55:34.937108  932489 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.key
	I1229 07:55:34.937179  932489 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key.b9b3cced
	I1229 07:55:34.937233  932489 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key
	I1229 07:55:34.937361  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:55:34.937399  932489 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:55:34.937412  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:55:34.937452  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:55:34.937484  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:55:34.937517  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:55:34.937567  932489 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:55:34.938174  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:55:34.958972  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:55:34.979787  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:55:35.003439  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:55:35.029255  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:55:35.059784  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:55:35.086174  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:55:35.107937  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:55:35.127602  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:55:35.155177  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:55:35.179816  932489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:55:35.208444  932489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:55:35.223967  932489 ssh_runner.go:195] Run: openssl version
	I1229 07:55:35.231200  932489 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.242913  932489 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:55:35.251822  932489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.256510  932489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.256641  932489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:55:35.301821  932489 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:55:35.310218  932489 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.317555  932489 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:55:35.324707  932489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.328546  932489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.328615  932489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:55:35.369064  932489 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:55:35.376588  932489 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.385258  932489 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:55:35.392693  932489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.396545  932489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.396675  932489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:55:35.438009  932489 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
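
The block above installs each CA bundle by running `openssl x509 -hash -noout -in <pem>` and symlinking the file to /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of those two steps, assuming it runs as root on the node with openssl on PATH; this is an illustration, not minikube's own code:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // one of the bundles from the log

	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks certificates up as <hash>.0 under the certs directory.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pem)
}
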
	I1229 07:55:35.445486  932489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:55:35.449097  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:55:35.490117  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:55:35.535089  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:55:35.588117  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:55:35.645389  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:55:35.729704  932489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
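
The `openssl x509 -checkend 86400` calls above exit non-zero if a certificate expires within the next 24 hours, which is what decides whether the restart path has to regenerate certs. A pure-Go equivalent of that check with crypto/x509, as a sketch (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Same test as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h; NotAfter =", cert.NotAfter)
}
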
	I1229 07:55:35.809860  932489 kubeadm.go:401] StartCluster: {Name:no-preload-822299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-822299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:55:35.809957  932489 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:55:35.810055  932489 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:55:35.878184  932489 cri.go:96] found id: "6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989"
	I1229 07:55:35.878207  932489 cri.go:96] found id: "41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8"
	I1229 07:55:35.878212  932489 cri.go:96] found id: "0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d"
	I1229 07:55:35.878216  932489 cri.go:96] found id: "b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59"
	I1229 07:55:35.878219  932489 cri.go:96] found id: ""
	I1229 07:55:35.878290  932489 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:55:35.893188  932489 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:55:35Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:55:35.893301  932489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:55:35.902959  932489 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:55:35.902979  932489 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:55:35.903048  932489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:55:35.913281  932489 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:55:35.913760  932489 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-822299" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:55:35.913909  932489 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-822299" cluster setting kubeconfig missing "no-preload-822299" context setting]
	I1229 07:55:35.914222  932489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:35.915796  932489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:55:35.924616  932489 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:55:35.924649  932489 kubeadm.go:602] duration metric: took 21.664668ms to restartPrimaryControlPlane
	I1229 07:55:35.924679  932489 kubeadm.go:403] duration metric: took 114.830263ms to StartCluster
	I1229 07:55:35.924701  932489 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:35.924774  932489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:55:35.925406  932489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:55:35.925647  932489 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:55:35.926044  932489 config.go:182] Loaded profile config "no-preload-822299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:55:35.926044  932489 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:55:35.926123  932489 addons.go:70] Setting storage-provisioner=true in profile "no-preload-822299"
	I1229 07:55:35.926147  932489 addons.go:239] Setting addon storage-provisioner=true in "no-preload-822299"
	W1229 07:55:35.926153  932489 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:55:35.926174  932489 addons.go:70] Setting dashboard=true in profile "no-preload-822299"
	I1229 07:55:35.926188  932489 addons.go:239] Setting addon dashboard=true in "no-preload-822299"
	I1229 07:55:35.926182  932489 addons.go:70] Setting default-storageclass=true in profile "no-preload-822299"
	W1229 07:55:35.926194  932489 addons.go:248] addon dashboard should already be in state true
	I1229 07:55:35.926202  932489 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-822299"
	I1229 07:55:35.926214  932489 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:55:35.926496  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.926670  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.926176  932489 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:55:35.928334  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.932931  932489 out.go:179] * Verifying Kubernetes components...
	I1229 07:55:35.936162  932489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:55:35.984877  932489 addons.go:239] Setting addon default-storageclass=true in "no-preload-822299"
	W1229 07:55:35.984899  932489 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:55:35.984922  932489 host.go:66] Checking if "no-preload-822299" exists ...
	I1229 07:55:35.985359  932489 cli_runner.go:164] Run: docker container inspect no-preload-822299 --format={{.State.Status}}
	I1229 07:55:35.994423  932489 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:55:35.994550  932489 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:55:35.998407  932489 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:55:35.998515  932489 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:55:35.998526  932489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:55:35.998592  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:36.001316  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:55:36.001349  932489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:55:36.001433  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:36.028998  932489 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:55:36.029022  932489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:55:36.029086  932489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-822299
	I1229 07:55:36.054442  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:36.069186  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:36.078891  932489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33828 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/no-preload-822299/id_rsa Username:docker}
	I1229 07:55:36.271345  932489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:55:36.296440  932489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:55:36.297048  932489 node_ready.go:35] waiting up to 6m0s for node "no-preload-822299" to be "Ready" ...
	I1229 07:55:36.338245  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:55:36.338312  932489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:55:36.366491  932489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:55:36.385061  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:55:36.385130  932489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:55:36.437192  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:55:36.437262  932489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:55:36.498075  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:55:36.498142  932489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:55:36.534266  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:55:36.534345  932489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:55:36.561218  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:55:36.561293  932489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:55:36.589935  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:55:36.590011  932489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:55:36.617829  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:55:36.617902  932489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:55:36.638864  932489 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:55:36.638941  932489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:55:36.661692  932489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:55:39.152777  932489 node_ready.go:49] node "no-preload-822299" is "Ready"
	I1229 07:55:39.152849  932489 node_ready.go:38] duration metric: took 2.85577505s for node "no-preload-822299" to be "Ready" ...
	I1229 07:55:39.152864  932489 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:55:39.152924  932489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:55:39.336497  932489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.039970658s)
	I1229 07:55:40.322844  932489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.956276701s)
	I1229 07:55:40.322983  932489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.661189936s)
	I1229 07:55:40.323074  932489 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.170134431s)
	I1229 07:55:40.323132  932489 api_server.go:72] duration metric: took 4.397445054s to wait for apiserver process to appear ...
	I1229 07:55:40.323144  932489 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:55:40.323161  932489 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:55:40.326123  932489 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-822299 addons enable metrics-server
	
	I1229 07:55:40.329091  932489 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1229 07:55:40.332020  932489 addons.go:530] duration metric: took 4.405978969s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1229 07:55:40.338690  932489 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:55:40.338715  932489 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:55:40.823279  932489 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:55:40.831438  932489 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:55:40.832620  932489 api_server.go:141] control plane version: v1.35.0
	I1229 07:55:40.832647  932489 api_server.go:131] duration metric: took 509.496144ms to wait for apiserver health ...
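
The healthz wait above polls https://192.168.85.2:8443/healthz until it returns 200, tolerating the transient 500 while the rbac/bootstrap-roles post-start hook finishes. A minimal polling sketch under that assumption; InsecureSkipVerify is used here only because this illustration does not load the cluster CA that minikube itself trusts:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification for brevity; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// A 500 listing "[-]poststarthook/rbac/bootstrap-roles failed" is expected briefly.
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
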
	I1229 07:55:40.832658  932489 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:55:40.835928  932489 system_pods.go:59] 8 kube-system pods found
	I1229 07:55:40.835969  932489 system_pods.go:61] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:40.835979  932489 system_pods.go:61] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:55:40.836015  932489 system_pods.go:61] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:55:40.836030  932489 system_pods.go:61] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:40.836038  932489 system_pods.go:61] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:55:40.836049  932489 system_pods.go:61] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:55:40.836057  932489 system_pods.go:61] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:55:40.836090  932489 system_pods.go:61] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:40.836103  932489 system_pods.go:74] duration metric: took 3.438883ms to wait for pod list to return data ...
	I1229 07:55:40.836124  932489 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:55:40.838609  932489 default_sa.go:45] found service account: "default"
	I1229 07:55:40.838634  932489 default_sa.go:55] duration metric: took 2.498724ms for default service account to be created ...
	I1229 07:55:40.838643  932489 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:55:40.841608  932489 system_pods.go:86] 8 kube-system pods found
	I1229 07:55:40.841643  932489 system_pods.go:89] "coredns-7d764666f9-9pqjr" [da4ef017-c306-45e2-b667-f47b190319ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:55:40.841661  932489 system_pods.go:89] "etcd-no-preload-822299" [e18f03c4-83a6-439f-a825-cdf568bbb3fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:55:40.841674  932489 system_pods.go:89] "kindnet-mtpz6" [47b6fe60-126e-4d25-aad6-72f6949ff1b9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:55:40.841685  932489 system_pods.go:89] "kube-apiserver-no-preload-822299" [1e438adc-8bba-4b21-bf75-c35b34fb442f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:55:40.841697  932489 system_pods.go:89] "kube-controller-manager-no-preload-822299" [fdd6fefb-1f32-4ff1-b1c7-18c3254a0cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:55:40.841711  932489 system_pods.go:89] "kube-proxy-j7ktq" [2a4725d6-5c8f-414a-bbcb-b2313e27dcf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:55:40.841719  932489 system_pods.go:89] "kube-scheduler-no-preload-822299" [3ea6cf1a-d03f-4c5e-8ae3-efe59307782a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:55:40.841727  932489 system_pods.go:89] "storage-provisioner" [ac000c47-e858-4827-9d8d-b6246442cde8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:55:40.841740  932489 system_pods.go:126] duration metric: took 3.091164ms to wait for k8s-apps to be running ...
	I1229 07:55:40.841749  932489 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:55:40.841808  932489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:55:40.856400  932489 system_svc.go:56] duration metric: took 14.641295ms WaitForService to wait for kubelet
	I1229 07:55:40.856475  932489 kubeadm.go:587] duration metric: took 4.930785279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:55:40.856508  932489 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:55:40.859706  932489 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:55:40.859784  932489 node_conditions.go:123] node cpu capacity is 2
	I1229 07:55:40.859811  932489 node_conditions.go:105] duration metric: took 3.279579ms to run NodePressure ...
	I1229 07:55:40.859839  932489 start.go:242] waiting for startup goroutines ...
	I1229 07:55:40.859883  932489 start.go:247] waiting for cluster config update ...
	I1229 07:55:40.859907  932489 start.go:256] writing updated cluster config ...
	I1229 07:55:40.860229  932489 ssh_runner.go:195] Run: rm -f paused
	I1229 07:55:40.864137  932489 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:55:40.868557  932489 pod_ready.go:83] waiting for pod "coredns-7d764666f9-9pqjr" in "kube-system" namespace to be "Ready" or be gone ...
	W1229 07:55:42.880390  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:45.374281  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:47.374900  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:49.883830  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:52.374455  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:54.875300  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:57.373808  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:55:59.374438  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:01.374583  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:03.874010  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:06.374271  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:08.873819  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:10.873858  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	W1229 07:56:12.874597  932489 pod_ready.go:104] pod "coredns-7d764666f9-9pqjr" is not "Ready", error: <nil>
	I1229 07:56:13.374904  932489 pod_ready.go:94] pod "coredns-7d764666f9-9pqjr" is "Ready"
	I1229 07:56:13.374934  932489 pod_ready.go:86] duration metric: took 32.506304502s for pod "coredns-7d764666f9-9pqjr" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.377855  932489 pod_ready.go:83] waiting for pod "etcd-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.382546  932489 pod_ready.go:94] pod "etcd-no-preload-822299" is "Ready"
	I1229 07:56:13.382574  932489 pod_ready.go:86] duration metric: took 4.691872ms for pod "etcd-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.385125  932489 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.390218  932489 pod_ready.go:94] pod "kube-apiserver-no-preload-822299" is "Ready"
	I1229 07:56:13.390257  932489 pod_ready.go:86] duration metric: took 5.10918ms for pod "kube-apiserver-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.392573  932489 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.572977  932489 pod_ready.go:94] pod "kube-controller-manager-no-preload-822299" is "Ready"
	I1229 07:56:13.573005  932489 pod_ready.go:86] duration metric: took 180.399648ms for pod "kube-controller-manager-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:13.772996  932489 pod_ready.go:83] waiting for pod "kube-proxy-j7ktq" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.174113  932489 pod_ready.go:94] pod "kube-proxy-j7ktq" is "Ready"
	I1229 07:56:14.174185  932489 pod_ready.go:86] duration metric: took 401.158316ms for pod "kube-proxy-j7ktq" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.373777  932489 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.772843  932489 pod_ready.go:94] pod "kube-scheduler-no-preload-822299" is "Ready"
	I1229 07:56:14.772873  932489 pod_ready.go:86] duration metric: took 399.068129ms for pod "kube-scheduler-no-preload-822299" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:56:14.772886  932489 pod_ready.go:40] duration metric: took 33.908671326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
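
The extra wait above iterates over kube-system pods carrying one of the listed labels and checks each pod's Ready condition. A small client-go sketch of the same check for the k8s-app=kube-dns label, assuming a kubeconfig at the default location; this illustrates the condition being tested, not minikube's own implementation:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ready reports whether the pod's Ready condition is True.
func ready(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s extra wait in the log
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			log.Fatal(err)
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !ready(p) {
				allReady = false
			}
		}
		if allReady {
			fmt.Println("all kube-dns pods Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pods to be Ready")
}
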
	I1229 07:56:14.831165  932489 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:56:14.834226  932489 out.go:203] 
	W1229 07:56:14.837062  932489 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:56:14.840042  932489 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:56:14.842947  932489 out.go:179] * Done! kubectl is now configured to use "no-preload-822299" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 07:56:11 no-preload-822299 crio[664]: time="2025-12-29T07:56:11.293996986Z" level=info msg="Created container 1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f: kube-system/storage-provisioner/storage-provisioner" id=e3d31caf-ad04-43e6-8e45-a57c6d1f8717 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:56:11 no-preload-822299 crio[664]: time="2025-12-29T07:56:11.295077322Z" level=info msg="Starting container: 1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f" id=bf51766b-7f4e-48f6-af24-af2d9582f057 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:56:11 no-preload-822299 crio[664]: time="2025-12-29T07:56:11.298759456Z" level=info msg="Started container" PID=1684 containerID=1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f description=kube-system/storage-provisioner/storage-provisioner id=bf51766b-7f4e-48f6-af24-af2d9582f057 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a4c1545ee32dce5bcab90a458c70db246beaf244e92cebd95bdcc4097234cb4
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.776811113Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.776849012Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.782644564Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.78268033Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.787181383Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.787216435Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.787542788Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.793057337Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:56:20 no-preload-822299 crio[664]: time="2025-12-29T07:56:20.793093415Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.086719696Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c8202fb2-fd56-4ffc-942b-9fd455bbe1dc name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.08798144Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=32e97173-acfa-4541-a7d2-ab95a5c1cc2b name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.089216911Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper" id=3b6882bb-a73a-455e-a38c-672148bf4b0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.089348736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.096612072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.097181028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.115099075Z" level=info msg="Created container 242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper" id=3b6882bb-a73a-455e-a38c-672148bf4b0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.115910986Z" level=info msg="Starting container: 242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9" id=f9b76773-5219-4d65-8524-1c3c688ae49a name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.118459Z" level=info msg="Started container" PID=1757 containerID=242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper id=f9b76773-5219-4d65-8524-1c3c688ae49a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7a5e83c6ddb881c8750a90c1082f639154a69abd0b7828a5f49b5edebc941ec
	Dec 29 07:56:23 no-preload-822299 conmon[1755]: conmon 242d9128050d7dc895a3 <ninfo>: container 1757 exited with status 1
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.291513376Z" level=info msg="Removing container: be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f" id=1213aabe-f73a-4bfb-87e2-67db4220b935 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.29946644Z" level=info msg="Error loading conmon cgroup of container be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f: cgroup deleted" id=1213aabe-f73a-4bfb-87e2-67db4220b935 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:56:23 no-preload-822299 crio[664]: time="2025-12-29T07:56:23.302717293Z" level=info msg="Removed container be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf/dashboard-metrics-scraper" id=1213aabe-f73a-4bfb-87e2-67db4220b935 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	242d9128050d7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   b7a5e83c6ddb8       dashboard-metrics-scraper-867fb5f87b-msgsf   kubernetes-dashboard
	1ac564eb3b678       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   2a4c1545ee32d       storage-provisioner                          kube-system
	8b4305f000816       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   edb7d744b8aa5       kubernetes-dashboard-b84665fb8-w5gsg         kubernetes-dashboard
	58e48a40d8762       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   5e92efaee01f4       busybox                                      default
	207de0eff4ea6       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   aaf81f5bc1730       coredns-7d764666f9-9pqjr                     kube-system
	027c0db3fe162       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   2a4c1545ee32d       storage-provisioner                          kube-system
	8c7aaa2e263d6       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           51 seconds ago      Running             kube-proxy                  1                   1856b9a3521f5       kube-proxy-j7ktq                             kube-system
	4b778416881b6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   60564c4e90ddf       kindnet-mtpz6                                kube-system
	6032c154ef816       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           56 seconds ago      Running             etcd                        1                   0f3129b3fb9d2       etcd-no-preload-822299                       kube-system
	41a7706d81e9c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           56 seconds ago      Running             kube-controller-manager     1                   3b315b6cd9c17       kube-controller-manager-no-preload-822299    kube-system
	0089246ff4de9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           56 seconds ago      Running             kube-apiserver              1                   bdd95c369f8f3       kube-apiserver-no-preload-822299             kube-system
	b07cb6606d4da       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           56 seconds ago      Running             kube-scheduler              1                   e018df3acbbfd       kube-scheduler-no-preload-822299             kube-system
	
	
	==> coredns [207de0eff4ea65084cae90279e02da02d1295619882838059e4fbfcf26208f18] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50471 - 58572 "HINFO IN 6307879131506065131.6569118893440573743. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011908091s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-822299
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-822299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=no-preload-822299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_54_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:54:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-822299
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:56:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:54:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:56:10 +0000   Mon, 29 Dec 2025 07:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-822299
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                19ace3de-03e9-4b9a-9295-4970de68f6fd
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-9pqjr                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-no-preload-822299                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         112s
	  kube-system                 kindnet-mtpz6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-822299              250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-822299     200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-j7ktq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-822299              100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-msgsf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-w5gsg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-822299 event: Registered Node no-preload-822299 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-822299 event: Registered Node no-preload-822299 in Controller
	
	
	==> dmesg <==
	[Dec29 07:26] overlayfs: idmapped layers are currently not supported
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6032c154ef8167005227b5440b39d68c2447ec5f8c0c91fd788340c8ee16c989] <==
	{"level":"info","ts":"2025-12-29T07:55:35.879619Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:55:35.879738Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:55:35.888651Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-29T07:55:35.893382Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:55:35.893554Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:55:35.896921Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:55:35.896950Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:55:36.720832Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:55:36.722903Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:55:36.723000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:55:36.723059Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:55:36.723100Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.724830Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.724901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:55:36.724943Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.724978Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:55:36.728147Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-822299 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:55:36.728221Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:55:36.728285Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:55:36.729359Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:55:36.731161Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:55:36.735562Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:55:36.740924Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:55:36.736798Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:55:36.741796Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 07:56:32 up  4:39,  0 user,  load average: 1.29, 1.77, 2.22
	Linux no-preload-822299 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4b778416881b60e07179255537ddb5371244f603b8bc5d645105c9f71226af14] <==
	I1229 07:55:40.564656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:55:40.655819       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:55:40.656015       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:55:40.656062       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:55:40.656116       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:55:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:55:40.764723       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:55:40.764752       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:55:40.764769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:55:40.765932       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:56:10.764928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:56:10.766057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 07:56:10.766113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:56:10.766196       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 07:56:11.965933       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:56:11.965965       1 metrics.go:72] Registering metrics
	I1229 07:56:11.966040       1 controller.go:711] "Syncing nftables rules"
	I1229 07:56:20.770707       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:56:20.770746       1 main.go:301] handling current node
	I1229 07:56:30.768868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:56:30.768897       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0089246ff4de942bd19f5b3b44618b8533216e0a07a0df9f128c470bc1cfe64d] <==
	I1229 07:55:39.361759       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:55:39.361744       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:39.362287       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:55:39.365433       1 policy_source.go:248] refreshing policies
	I1229 07:55:39.362367       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1229 07:55:39.361769       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:39.380016       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:55:39.386131       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:55:39.386407       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:55:39.386739       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:55:39.386796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:55:39.386828       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:55:39.398348       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:55:39.411110       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:55:39.783516       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:55:39.821899       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:55:39.849259       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:55:39.865324       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:55:39.866926       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:55:39.883729       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:55:39.962020       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.228.189"}
	I1229 07:55:39.989407       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.142.107"}
	I1229 07:55:42.671226       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:55:42.871049       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:55:42.921292       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [41a7706d81e9c15097d3a00866892e44a875935df4795885b8436c4c59bb8fa8] <==
	I1229 07:55:42.294735       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.294758       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:55:42.294791       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:55:42.294801       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:42.294805       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.294889       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:42.294927       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295072       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295079       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295137       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295112       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.296480       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295121       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.298945       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.294917       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.295128       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.299609       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.349257       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.395390       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.397589       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:42.397704       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:55:42.397736       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:55:42.875160       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1229 07:55:42.881175       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1229 07:55:42.881265       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [8c7aaa2e263d66f365a99d3972aed68fc1fca62757f4eb3b373fe85d0cbf3f11] <==
	I1229 07:55:40.651411       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:55:40.750377       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:40.850989       1 shared_informer.go:377] "Caches are synced"
	I1229 07:55:40.851074       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:55:40.851167       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:55:40.887486       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:55:40.887541       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:55:40.891024       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:55:40.891316       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:55:40.891342       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:55:40.895219       1 config.go:200] "Starting service config controller"
	I1229 07:55:40.895735       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:55:40.896238       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:55:40.896254       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:55:40.896268       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:55:40.896272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:55:40.900030       1 config.go:309] "Starting node config controller"
	I1229 07:55:40.900125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:55:40.900157       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:55:40.996928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:55:40.996961       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:55:40.996929       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b07cb6606d4da3395c3f8cff43125cc60ee39b88d0e4ebd07629beb28ba19a59] <==
	I1229 07:55:37.524808       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:55:39.005518       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:55:39.005556       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:55:39.005567       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:55:39.005575       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:55:39.243260       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:55:39.243289       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:55:39.254080       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:55:39.254109       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:55:39.259055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:55:39.259106       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:55:39.455158       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:55:50 no-preload-822299 kubelet[785]: E1229 07:55:50.194006     785 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-822299" containerName="kube-apiserver"
	Dec 29 07:55:52 no-preload-822299 kubelet[785]: E1229 07:55:52.199528     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-w5gsg" containerName="kubernetes-dashboard"
	Dec 29 07:55:52 no-preload-822299 kubelet[785]: I1229 07:55:52.214559     785 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-w5gsg" podStartSLOduration=1.775375251 podStartE2EDuration="10.214545031s" podCreationTimestamp="2025-12-29 07:55:42 +0000 UTC" firstStartedPulling="2025-12-29 07:55:43.165588578 +0000 UTC m=+8.226622191" lastFinishedPulling="2025-12-29 07:55:51.604758367 +0000 UTC m=+16.665791971" observedRunningTime="2025-12-29 07:55:52.213379559 +0000 UTC m=+17.274413164" watchObservedRunningTime="2025-12-29 07:55:52.214545031 +0000 UTC m=+17.275578636"
	Dec 29 07:55:53 no-preload-822299 kubelet[785]: E1229 07:55:53.201360     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-w5gsg" containerName="kubernetes-dashboard"
	Dec 29 07:55:54 no-preload-822299 kubelet[785]: E1229 07:55:54.294017     785 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-822299" containerName="kube-controller-manager"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: E1229 07:55:58.085494     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: I1229 07:55:58.085548     785 scope.go:122] "RemoveContainer" containerID="9433232180bc4a8bf1784a4e6bded6091abc98b786b7f8cc8df05f2ee732dd98"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: I1229 07:55:58.215163     785 scope.go:122] "RemoveContainer" containerID="9433232180bc4a8bf1784a4e6bded6091abc98b786b7f8cc8df05f2ee732dd98"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: E1229 07:55:58.215701     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: I1229 07:55:58.215811     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:55:58 no-preload-822299 kubelet[785]: E1229 07:55:58.216101     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-msgsf_kubernetes-dashboard(fe6e803c-6769-42f9-a71e-999b6cfd8085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" podUID="fe6e803c-6769-42f9-a71e-999b6cfd8085"
	Dec 29 07:55:59 no-preload-822299 kubelet[785]: E1229 07:55:59.806781     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:55:59 no-preload-822299 kubelet[785]: I1229 07:55:59.807335     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:55:59 no-preload-822299 kubelet[785]: E1229 07:55:59.807586     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-msgsf_kubernetes-dashboard(fe6e803c-6769-42f9-a71e-999b6cfd8085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" podUID="fe6e803c-6769-42f9-a71e-999b6cfd8085"
	Dec 29 07:56:11 no-preload-822299 kubelet[785]: I1229 07:56:11.257642     785 scope.go:122] "RemoveContainer" containerID="027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed"
	Dec 29 07:56:13 no-preload-822299 kubelet[785]: E1229 07:56:13.154778     785 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-9pqjr" containerName="coredns"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: E1229 07:56:23.086243     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: I1229 07:56:23.086284     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: I1229 07:56:23.289248     785 scope.go:122] "RemoveContainer" containerID="be69daa18533ca656c19bebbd5597114f1a4aa8433d14a9a86f6b381ef7d6d8f"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: E1229 07:56:23.289690     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" containerName="dashboard-metrics-scraper"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: I1229 07:56:23.290532     785 scope.go:122] "RemoveContainer" containerID="242d9128050d7dc895a326106af73fe7da1e4aad58427230a10b1e47ec73cfd9"
	Dec 29 07:56:23 no-preload-822299 kubelet[785]: E1229 07:56:23.290827     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-msgsf_kubernetes-dashboard(fe6e803c-6769-42f9-a71e-999b6cfd8085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-msgsf" podUID="fe6e803c-6769-42f9-a71e-999b6cfd8085"
	Dec 29 07:56:27 no-preload-822299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:56:27 no-preload-822299 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:56:27 no-preload-822299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8b4305f000816e5647e84ad1081d4c1e587f0138a84b81bb87dd316602494d5c] <==
	2025/12/29 07:55:51 Using namespace: kubernetes-dashboard
	2025/12/29 07:55:51 Using in-cluster config to connect to apiserver
	2025/12/29 07:55:51 Using secret token for csrf signing
	2025/12/29 07:55:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:55:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:55:51 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:55:51 Generating JWE encryption key
	2025/12/29 07:55:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:55:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:55:51 Initializing JWE encryption key from synchronized object
	2025/12/29 07:55:51 Creating in-cluster Sidecar client
	2025/12/29 07:55:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:55:51 Serving insecurely on HTTP port: 9090
	2025/12/29 07:56:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:55:51 Starting overwatch
	
	
	==> storage-provisioner [027c0db3fe16212396a84ea9fbe3ce1fcded7ccde2a4c9332868ffe62e4474ed] <==
	I1229 07:55:40.640185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:56:10.642572       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1ac564eb3b678f5d18e79787bddd400f6078af4d4aa2ee610fd521cfc8c5ae9f] <==
	I1229 07:56:11.309475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:56:11.322132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:56:11.322200       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:56:11.324413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:14.780772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:19.041730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:22.640513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:25.694472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:28.717040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:28.724911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:56:28.725102       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:56:28.725483       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40fb3eac-348b-44c6-ae27-7f5b53dd11a0", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-822299_1f3396cc-24ef-4c60-adfd-abd09ea7a20d became leader
	I1229 07:56:28.725566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-822299_1f3396cc-24ef-4c60-adfd-abd09ea7a20d!
	W1229 07:56:28.728548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:28.739418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:56:28.826495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-822299_1f3396cc-24ef-4c60-adfd-abd09ea7a20d!
	W1229 07:56:30.745383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:30.752383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:32.756483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:56:32.763217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-822299 -n no-preload-822299
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-822299 -n no-preload-822299: exit status 2 (349.252285ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-822299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.92s)
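
The status probes in the post-mortem above pass a Go text/template expression to --format (for example {{.APIServer}}, which printed just "Running"). The sketch below shows how such a template selects a single field from a status value; the struct and field names are assumptions chosen for illustration, not minikube's actual types.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status stands in for the value implied by --format={{.APIServer}}.
	// Sketch only: field names are assumed, not minikube's real struct.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}
	
	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
	
		// The same template string the helper passes on the command line.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	
		// Renders only the selected field ("Running"), which is why the
		// stdout block above contains a single word.
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}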

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.529505ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:57:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
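
The exit status 11 traces back to the probe quoted in the stderr block: the paused-state check ran "sudo runc list -f json", which exited with status 1 because /run/runc does not exist, presumably since this profile runs CRI-O rather than a runc-managed Docker runtime. Below is a minimal, illustrative sketch of invoking that same command from Go; it is not minikube's implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// The same probe quoted in the stderr block above. On this node it
		// fails with "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// Mirrors the exit-status-1 path that the addon command reports.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("containers known to runc: %s\n", out)
	}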
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-664524 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-664524 describe deploy/metrics-server -n kube-system: exit status 1 (88.270062ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-664524 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
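
The assertion above looks for the overridden image string in the kubectl describe output; since the metrics-server deployment was never created, there is nothing to match. A minimal sketch of that kind of check, reusing the same context and expected image as the flags above (illustrative only, not the test's actual code):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Describe the deployment the addon is expected to create.
		out, err := exec.Command("kubectl", "--context", "embed-certs-664524",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			// Matches the NotFound error captured above when the addon never deployed.
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}
		// Look for the rewritten registry/image requested via --images/--registries.
		if strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("metrics-server is using the overridden image")
		} else {
			fmt.Println("metrics-server image was not rewritten as expected")
		}
	}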
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-664524
helpers_test.go:244: (dbg) docker inspect embed-certs-664524:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1",
	        "Created": "2025-12-29T07:56:41.560040966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 937065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:56:41.618923478Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/hosts",
	        "LogPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1-json.log",
	        "Name": "/embed-certs-664524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-664524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-664524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1",
	                "LowerDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-664524",
	                "Source": "/var/lib/docker/volumes/embed-certs-664524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-664524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-664524",
	                "name.minikube.sigs.k8s.io": "embed-certs-664524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f060ae2809d4f880aab28745a62e69151e60fe18f21472cb5db5e0b2100a80ba",
	            "SandboxKey": "/var/run/docker/netns/f060ae2809d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-664524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:21:c8:35:62:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2572828bf01c704dfae705c3731b5490ef1a8c7071b9170453d61e871f33f10",
	                    "EndpointID": "d5e861d3b614fb65ea015442ad5f7c5013b935a32ce3334a2dc2ee1520ac8b20",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-664524",
	                        "ece150c04980"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-664524 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-664524 logs -n 25: (1.193368568s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-002562                                                                                                                                                                                                                   │ force-systemd-env-002562 │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-600463      │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ cert-options-600463 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-600463      │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ ssh     │ -p cert-options-600463 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-600463      │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ delete  │ -p cert-options-600463                                                                                                                                                                                                                        │ cert-options-600463      │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:51 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:51 UTC │ 29 Dec 25 07:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-512867 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │                     │
	│ stop    │ -p old-k8s-version-512867 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:52 UTC │ 29 Dec 25 07:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867   │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299        │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524       │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524       │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:56:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:56:36.492984  936640 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:56:36.493178  936640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:56:36.493192  936640 out.go:374] Setting ErrFile to fd 2...
	I1229 07:56:36.493199  936640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:56:36.493508  936640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:56:36.493972  936640 out.go:368] Setting JSON to false
	I1229 07:56:36.495362  936640 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16748,"bootTime":1766978248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:56:36.495467  936640 start.go:143] virtualization:  
	I1229 07:56:36.499836  936640 out.go:179] * [embed-certs-664524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:56:36.506427  936640 notify.go:221] Checking for updates...
	I1229 07:56:36.507021  936640 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:56:36.510580  936640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:56:36.513990  936640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:56:36.519101  936640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:56:36.522281  936640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:56:36.525539  936640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:56:36.529499  936640 config.go:182] Loaded profile config "force-systemd-flag-122604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:56:36.529669  936640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:56:36.563881  936640 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:56:36.564010  936640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:56:36.630246  936640 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:56:36.620387522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:56:36.630356  936640 docker.go:319] overlay module found
	I1229 07:56:36.633659  936640 out.go:179] * Using the docker driver based on user configuration
	I1229 07:56:36.636711  936640 start.go:309] selected driver: docker
	I1229 07:56:36.636729  936640 start.go:928] validating driver "docker" against <nil>
	I1229 07:56:36.636743  936640 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:56:36.637599  936640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:56:36.707383  936640 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:56:36.697448792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:56:36.707535  936640 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:56:36.707767  936640 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:56:36.710999  936640 out.go:179] * Using Docker driver with root privileges
	I1229 07:56:36.713987  936640 cni.go:84] Creating CNI manager for ""
	I1229 07:56:36.714056  936640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:56:36.714078  936640 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:56:36.714162  936640 start.go:353] cluster config:
	{Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:56:36.717549  936640 out.go:179] * Starting "embed-certs-664524" primary control-plane node in "embed-certs-664524" cluster
	I1229 07:56:36.720494  936640 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:56:36.723517  936640 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:56:36.726533  936640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:56:36.726583  936640 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:56:36.726595  936640 cache.go:65] Caching tarball of preloaded images
	I1229 07:56:36.726630  936640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:56:36.726682  936640 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:56:36.726693  936640 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:56:36.726803  936640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/config.json ...
	I1229 07:56:36.726820  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/config.json: {Name:mk53aa0c194f60ac8a24603041d449f2aee50e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:36.749045  936640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:56:36.749068  936640 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:56:36.749089  936640 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:56:36.749124  936640 start.go:360] acquireMachinesLock for embed-certs-664524: {Name:mk3e68f4ad1434f78982be79fcb8ab8f3f5e9bab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:56:36.749239  936640 start.go:364] duration metric: took 94.639µs to acquireMachinesLock for "embed-certs-664524"
	I1229 07:56:36.749270  936640 start.go:93] Provisioning new machine with config: &{Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:56:36.749337  936640 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:56:36.753051  936640 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:56:36.753317  936640 start.go:159] libmachine.API.Create for "embed-certs-664524" (driver="docker")
	I1229 07:56:36.753358  936640 client.go:173] LocalClient.Create starting
	I1229 07:56:36.753426  936640 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:56:36.753467  936640 main.go:144] libmachine: Decoding PEM data...
	I1229 07:56:36.753499  936640 main.go:144] libmachine: Parsing certificate...
	I1229 07:56:36.753555  936640 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:56:36.753583  936640 main.go:144] libmachine: Decoding PEM data...
	I1229 07:56:36.753598  936640 main.go:144] libmachine: Parsing certificate...
	I1229 07:56:36.753975  936640 cli_runner.go:164] Run: docker network inspect embed-certs-664524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:56:36.772989  936640 cli_runner.go:211] docker network inspect embed-certs-664524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:56:36.773097  936640 network_create.go:284] running [docker network inspect embed-certs-664524] to gather additional debugging logs...
	I1229 07:56:36.773117  936640 cli_runner.go:164] Run: docker network inspect embed-certs-664524
	W1229 07:56:36.789840  936640 cli_runner.go:211] docker network inspect embed-certs-664524 returned with exit code 1
	I1229 07:56:36.789869  936640 network_create.go:287] error running [docker network inspect embed-certs-664524]: docker network inspect embed-certs-664524: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-664524 not found
	I1229 07:56:36.789882  936640 network_create.go:289] output of [docker network inspect embed-certs-664524]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-664524 not found
	
	** /stderr **
	I1229 07:56:36.789970  936640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:56:36.806018  936640 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:56:36.806361  936640 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:56:36.806732  936640 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:56:36.806988  936640 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-714d1dc9bdce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:13:37:2f:fc:8f} reservation:<nil>}
	I1229 07:56:36.807427  936640 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a523d0}
	I1229 07:56:36.807452  936640 network_create.go:124] attempt to create docker network embed-certs-664524 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:56:36.807510  936640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-664524 embed-certs-664524
	I1229 07:56:36.863602  936640 network_create.go:108] docker network embed-certs-664524 192.168.85.0/24 created
	I1229 07:56:36.863639  936640 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-664524" container
	I1229 07:56:36.863720  936640 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:56:36.879702  936640 cli_runner.go:164] Run: docker volume create embed-certs-664524 --label name.minikube.sigs.k8s.io=embed-certs-664524 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:56:36.897954  936640 oci.go:103] Successfully created a docker volume embed-certs-664524
	I1229 07:56:36.898063  936640 cli_runner.go:164] Run: docker run --rm --name embed-certs-664524-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-664524 --entrypoint /usr/bin/test -v embed-certs-664524:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:56:37.477470  936640 oci.go:107] Successfully prepared a docker volume embed-certs-664524
	I1229 07:56:37.477558  936640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:56:37.477569  936640 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:56:37.477653  936640 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-664524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:56:41.490732  936640 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-664524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (4.013038094s)
	I1229 07:56:41.490770  936640 kic.go:203] duration metric: took 4.013196996s to extract preloaded images to volume ...
	W1229 07:56:41.490920  936640 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:56:41.491035  936640 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:56:41.544280  936640 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-664524 --name embed-certs-664524 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-664524 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-664524 --network embed-certs-664524 --ip 192.168.85.2 --volume embed-certs-664524:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:56:41.835077  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Running}}
	I1229 07:56:41.858891  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:56:41.887791  936640 cli_runner.go:164] Run: docker exec embed-certs-664524 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:56:41.937744  936640 oci.go:144] the created container "embed-certs-664524" has a running status.
	I1229 07:56:41.937772  936640 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa...
	I1229 07:56:42.257243  936640 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:56:42.285612  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:56:42.324531  936640 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:56:42.324557  936640 kic_runner.go:114] Args: [docker exec --privileged embed-certs-664524 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:56:42.396980  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:56:42.421811  936640 machine.go:94] provisionDockerMachine start ...
	I1229 07:56:42.421896  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:42.445656  936640 main.go:144] libmachine: Using SSH client type: native
	I1229 07:56:42.445999  936640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1229 07:56:42.446014  936640 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:56:42.446649  936640 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41346->127.0.0.1:33833: read: connection reset by peer
	I1229 07:56:45.604739  936640 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-664524
	
	I1229 07:56:45.604768  936640 ubuntu.go:182] provisioning hostname "embed-certs-664524"
	I1229 07:56:45.604865  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:45.622248  936640 main.go:144] libmachine: Using SSH client type: native
	I1229 07:56:45.622585  936640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1229 07:56:45.622603  936640 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-664524 && echo "embed-certs-664524" | sudo tee /etc/hostname
	I1229 07:56:45.786844  936640 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-664524
	
	I1229 07:56:45.786946  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:45.806079  936640 main.go:144] libmachine: Using SSH client type: native
	I1229 07:56:45.806399  936640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1229 07:56:45.806422  936640 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-664524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-664524/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-664524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:56:45.957186  936640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:56:45.957213  936640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:56:45.957234  936640 ubuntu.go:190] setting up certificates
	I1229 07:56:45.957242  936640 provision.go:84] configureAuth start
	I1229 07:56:45.957320  936640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-664524
	I1229 07:56:45.980424  936640 provision.go:143] copyHostCerts
	I1229 07:56:45.980520  936640 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:56:45.980529  936640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:56:45.980611  936640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:56:45.980700  936640 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:56:45.980705  936640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:56:45.980731  936640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:56:45.980808  936640 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:56:45.980814  936640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:56:45.980841  936640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:56:45.980891  936640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.embed-certs-664524 san=[127.0.0.1 192.168.85.2 embed-certs-664524 localhost minikube]
	I1229 07:56:46.083066  936640 provision.go:177] copyRemoteCerts
	I1229 07:56:46.083147  936640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:56:46.083219  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:46.100764  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:56:46.210092  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:56:46.230829  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:56:46.251682  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:56:46.273396  936640 provision.go:87] duration metric: took 316.131643ms to configureAuth
	I1229 07:56:46.273426  936640 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:56:46.273617  936640 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:56:46.273730  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:46.292658  936640 main.go:144] libmachine: Using SSH client type: native
	I1229 07:56:46.293049  936640 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1229 07:56:46.293074  936640 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:56:46.585424  936640 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:56:46.585467  936640 machine.go:97] duration metric: took 4.163627597s to provisionDockerMachine
	I1229 07:56:46.585478  936640 client.go:176] duration metric: took 9.832114033s to LocalClient.Create
	I1229 07:56:46.585497  936640 start.go:167] duration metric: took 9.832180692s to libmachine.API.Create "embed-certs-664524"
	I1229 07:56:46.585510  936640 start.go:293] postStartSetup for "embed-certs-664524" (driver="docker")
	I1229 07:56:46.585520  936640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:56:46.585606  936640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:56:46.585668  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:46.603835  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:56:46.713216  936640 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:56:46.716712  936640 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:56:46.716842  936640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:56:46.716881  936640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:56:46.716942  936640 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:56:46.717023  936640 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:56:46.717152  936640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:56:46.725136  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:56:46.744530  936640 start.go:296] duration metric: took 158.988044ms for postStartSetup
	I1229 07:56:46.744924  936640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-664524
	I1229 07:56:46.762282  936640 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/config.json ...
	I1229 07:56:46.762574  936640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:56:46.762632  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:46.782493  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:56:46.886420  936640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:56:46.891503  936640 start.go:128] duration metric: took 10.142150216s to createHost
	I1229 07:56:46.891531  936640 start.go:83] releasing machines lock for "embed-certs-664524", held for 10.142277626s
	I1229 07:56:46.891607  936640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-664524
	I1229 07:56:46.909647  936640 ssh_runner.go:195] Run: cat /version.json
	I1229 07:56:46.909708  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:46.909731  936640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:56:46.909791  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:56:46.930764  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:56:46.953742  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:56:47.147337  936640 ssh_runner.go:195] Run: systemctl --version
	I1229 07:56:47.154027  936640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:56:47.191437  936640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:56:47.195920  936640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:56:47.196011  936640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:56:47.225246  936640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:56:47.225311  936640 start.go:496] detecting cgroup driver to use...
	I1229 07:56:47.225361  936640 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:56:47.225438  936640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:56:47.244267  936640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:56:47.257244  936640 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:56:47.257352  936640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:56:47.273923  936640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:56:47.292867  936640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:56:47.414869  936640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:56:47.540865  936640 docker.go:234] disabling docker service ...
	I1229 07:56:47.540928  936640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:56:47.563322  936640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:56:47.577117  936640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:56:47.707322  936640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:56:47.833847  936640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:56:47.847131  936640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:56:47.861585  936640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:56:47.861703  936640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.870422  936640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:56:47.870552  936640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.879333  936640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.888625  936640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.898304  936640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:56:47.906418  936640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.915055  936640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.928303  936640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:56:47.937390  936640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:56:47.944987  936640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:56:47.952966  936640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:56:48.066767  936640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:56:48.253051  936640 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:56:48.253137  936640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:56:48.257544  936640 start.go:574] Will wait 60s for crictl version
	I1229 07:56:48.257609  936640 ssh_runner.go:195] Run: which crictl
	I1229 07:56:48.261478  936640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:56:48.288973  936640 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:56:48.289121  936640 ssh_runner.go:195] Run: crio --version
	I1229 07:56:48.316071  936640 ssh_runner.go:195] Run: crio --version
	I1229 07:56:48.349072  936640 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:56:48.351902  936640 cli_runner.go:164] Run: docker network inspect embed-certs-664524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:56:48.368086  936640 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:56:48.371973  936640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:56:48.381374  936640 kubeadm.go:884] updating cluster {Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:56:48.381505  936640 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:56:48.381561  936640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:56:48.419510  936640 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:56:48.419536  936640 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:56:48.419591  936640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:56:48.445574  936640 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:56:48.445599  936640 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:56:48.445607  936640 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:56:48.445690  936640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-664524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:56:48.450017  936640 ssh_runner.go:195] Run: crio config
	I1229 07:56:48.542378  936640 cni.go:84] Creating CNI manager for ""
	I1229 07:56:48.542403  936640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:56:48.542421  936640 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:56:48.542445  936640 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-664524 NodeName:embed-certs-664524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:56:48.542585  936640 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-664524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
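
The block above is the complete multi-document kubeadm configuration that minikube generates for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file), later written to /var/tmp/minikube/kubeadm.yaml. As a sketch only, and not a step performed in this run, a file like this can be sanity-checked on the node with kubeadm's built-in validator before init:

	# sketch: validate the generated config with the same kubeadm binary the run uses
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml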
	
	I1229 07:56:48.542660  936640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:56:48.551554  936640 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:56:48.551627  936640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:56:48.559653  936640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1229 07:56:48.573461  936640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:56:48.586856  936640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1229 07:56:48.600354  936640 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:56:48.604044  936640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
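
The one-liner above is how minikube pins control-plane.minikube.internal to the node IP in /etc/hosts: it strips any stale entry, appends the current mapping, and copies the result back over /etc/hosts via a temp file. Expanded for readability (same logic, only a sketch):

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new    # drop any old entry
	echo "192.168.85.2	control-plane.minikube.internal" >> /tmp/hosts.new       # add the current mapping
	sudo cp /tmp/hosts.new /etc/hosts                                            # install the rewritten file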
	I1229 07:56:48.614063  936640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:56:48.738730  936640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:56:48.755715  936640 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524 for IP: 192.168.85.2
	I1229 07:56:48.755735  936640 certs.go:195] generating shared ca certs ...
	I1229 07:56:48.755751  936640 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:48.755927  936640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:56:48.755990  936640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:56:48.756007  936640 certs.go:257] generating profile certs ...
	I1229 07:56:48.756077  936640 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.key
	I1229 07:56:48.756113  936640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.crt with IP's: []
	I1229 07:56:48.969353  936640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.crt ...
	I1229 07:56:48.969387  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.crt: {Name:mk8c1ca4d847503c928df055c7b26b7470602cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:48.969594  936640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.key ...
	I1229 07:56:48.969608  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/client.key: {Name:mke8254981272cfede15c770cbb36e916cc95492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:48.969704  936640 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key.b587eace
	I1229 07:56:48.969720  936640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt.b587eace with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:56:49.153271  936640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt.b587eace ...
	I1229 07:56:49.153304  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt.b587eace: {Name:mkb61696bc7b2dc9d2751f1fb31b0fe7c31ed34b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:49.153499  936640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key.b587eace ...
	I1229 07:56:49.153513  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key.b587eace: {Name:mk5344e77aecd94354787564c15900d95517aede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:49.153600  936640 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt.b587eace -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt
	I1229 07:56:49.153679  936640 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key.b587eace -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key
	I1229 07:56:49.153748  936640 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.key
	I1229 07:56:49.153767  936640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.crt with IP's: []
	I1229 07:56:49.393398  936640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.crt ...
	I1229 07:56:49.393432  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.crt: {Name:mkcd2941290709327af69e0a1462691d0b09197d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:49.393634  936640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.key ...
	I1229 07:56:49.393651  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.key: {Name:mkbfb5c4880c1034581a9f54358309eee5bb4af7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:56:49.393871  936640 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:56:49.393923  936640 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:56:49.393937  936640 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:56:49.393967  936640 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:56:49.394002  936640 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:56:49.394030  936640 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:56:49.394079  936640 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:56:49.394663  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:56:49.415282  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:56:49.436229  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:56:49.467836  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:56:49.495988  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1229 07:56:49.517357  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:56:49.535378  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:56:49.554965  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/embed-certs-664524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:56:49.573169  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:56:49.591821  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:56:49.610557  936640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:56:49.628064  936640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:56:49.641101  936640 ssh_runner.go:195] Run: openssl version
	I1229 07:56:49.647545  936640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:56:49.655314  936640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:56:49.663455  936640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:56:49.667452  936640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:56:49.667519  936640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:56:49.710863  936640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:56:49.718461  936640 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:56:49.726077  936640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:56:49.733669  936640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:56:49.741561  936640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:56:49.745343  936640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:56:49.745432  936640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:56:49.786192  936640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:56:49.793715  936640 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:56:49.801083  936640 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:56:49.808554  936640 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:56:49.816031  936640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:56:49.819635  936640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:56:49.819702  936640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:56:49.860733  936640 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:56:49.868257  936640 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
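
The openssl/ln pairs above register each CA certificate under its OpenSSL subject hash, which is how the system trust store looks certificates up: the symlink name is the output of openssl x509 -hash plus a ".0" suffix. For the minikubeCA certificate the equivalent pattern is (a sketch, using the paths and hash from the log):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"              # -> /etc/ssl/certs/b5213941.0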
	I1229 07:56:49.875540  936640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:56:49.879221  936640 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:56:49.879273  936640 kubeadm.go:401] StartCluster: {Name:embed-certs-664524 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-664524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:56:49.879354  936640 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:56:49.879426  936640 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:56:49.906682  936640 cri.go:96] found id: ""
	I1229 07:56:49.906808  936640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:56:49.914841  936640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:56:49.922769  936640 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:56:49.922838  936640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:56:49.931506  936640 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:56:49.931582  936640 kubeadm.go:158] found existing configuration files:
	
	I1229 07:56:49.931656  936640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:56:49.941699  936640 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:56:49.941777  936640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:56:49.950011  936640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:56:49.960532  936640 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:56:49.960596  936640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:56:49.968232  936640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:56:49.976204  936640 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:56:49.976268  936640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:56:49.983821  936640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:56:49.992183  936640 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:56:49.992307  936640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:56:50.000151  936640 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:56:50.046010  936640 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:56:50.046220  936640 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:56:50.119720  936640 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:56:50.119798  936640 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:56:50.119842  936640 kubeadm.go:319] OS: Linux
	I1229 07:56:50.119893  936640 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:56:50.119945  936640 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:56:50.119995  936640 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:56:50.120055  936640 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:56:50.120114  936640 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:56:50.120173  936640 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:56:50.120248  936640 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:56:50.120303  936640 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:56:50.120369  936640 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:56:50.195320  936640 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:56:50.195437  936640 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:56:50.195533  936640 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:56:50.205214  936640 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:56:50.211591  936640 out.go:252]   - Generating certificates and keys ...
	I1229 07:56:50.211694  936640 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:56:50.211768  936640 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:56:50.573952  936640 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:56:50.648748  936640 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:56:51.056008  936640 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:56:51.445404  936640 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:56:51.559736  936640 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:56:51.560062  936640 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-664524 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:56:51.746537  936640 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:56:51.747022  936640 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-664524 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:56:51.896448  936640 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:56:52.257692  936640 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:56:52.550296  936640 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:56:52.550573  936640 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:56:52.625819  936640 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:56:52.883896  936640 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:56:53.104665  936640 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:56:53.288364  936640 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:56:53.664668  936640 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:56:53.665262  936640 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:56:53.668040  936640 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:56:53.672039  936640 out.go:252]   - Booting up control plane ...
	I1229 07:56:53.672275  936640 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:56:53.672396  936640 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:56:53.672486  936640 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:56:53.687466  936640 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:56:53.687978  936640 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:56:53.700595  936640 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:56:53.706063  936640 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:56:53.706124  936640 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:56:53.856644  936640 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:56:53.856772  936640 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:56:55.358202  936640 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50162452s
	I1229 07:56:55.362159  936640 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:56:55.363839  936640 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1229 07:56:55.364211  936640 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:56:55.365027  936640 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:56:56.376935  936640 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.011412914s
	I1229 07:56:58.475674  936640 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.10991415s
	I1229 07:57:00.369724  936640 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.004977693s
	I1229 07:57:00.417599  936640 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:57:00.436992  936640 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:57:00.469005  936640 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:57:00.469217  936640 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-664524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:57:00.504826  936640 kubeadm.go:319] [bootstrap-token] Using token: 8x3w04.fxgzi05v14ve74uu
	I1229 07:57:00.508047  936640 out.go:252]   - Configuring RBAC rules ...
	I1229 07:57:00.508183  936640 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:57:00.518560  936640 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:57:00.538981  936640 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:57:00.543623  936640 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:57:00.548149  936640 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:57:00.553129  936640 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:57:00.777847  936640 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:57:01.218298  936640 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:57:01.786092  936640 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:57:01.787651  936640 kubeadm.go:319] 
	I1229 07:57:01.787725  936640 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:57:01.787762  936640 kubeadm.go:319] 
	I1229 07:57:01.787863  936640 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:57:01.787877  936640 kubeadm.go:319] 
	I1229 07:57:01.787906  936640 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:57:01.787992  936640 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:57:01.788054  936640 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:57:01.788068  936640 kubeadm.go:319] 
	I1229 07:57:01.788123  936640 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:57:01.788131  936640 kubeadm.go:319] 
	I1229 07:57:01.788179  936640 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:57:01.788187  936640 kubeadm.go:319] 
	I1229 07:57:01.788238  936640 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:57:01.788317  936640 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:57:01.788396  936640 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:57:01.788405  936640 kubeadm.go:319] 
	I1229 07:57:01.788490  936640 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:57:01.788572  936640 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:57:01.788581  936640 kubeadm.go:319] 
	I1229 07:57:01.788665  936640 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8x3w04.fxgzi05v14ve74uu \
	I1229 07:57:01.788771  936640 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:57:01.788823  936640 kubeadm.go:319] 	--control-plane 
	I1229 07:57:01.788829  936640 kubeadm.go:319] 
	I1229 07:57:01.789036  936640 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:57:01.789052  936640 kubeadm.go:319] 
	I1229 07:57:01.789141  936640 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8x3w04.fxgzi05v14ve74uu \
	I1229 07:57:01.789255  936640 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
	I1229 07:57:01.794244  936640 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:57:01.794663  936640 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:57:01.794774  936640 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:57:01.794794  936640 cni.go:84] Creating CNI manager for ""
	I1229 07:57:01.794803  936640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:57:01.799964  936640 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:57:01.802913  936640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:57:01.807256  936640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:57:01.807274  936640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:57:01.820710  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:57:02.188429  936640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:57:02.188555  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:02.188655  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-664524 minikube.k8s.io/updated_at=2025_12_29T07_57_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=embed-certs-664524 minikube.k8s.io/primary=true
	I1229 07:57:02.428625  936640 ops.go:34] apiserver oom_adj: -16
	I1229 07:57:02.428753  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:02.928941  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:03.429391  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:03.929591  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:04.429012  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:04.929024  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:05.429702  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:05.929798  936640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:57:06.030590  936640 kubeadm.go:1114] duration metric: took 3.842078464s to wait for elevateKubeSystemPrivileges
	I1229 07:57:06.030631  936640 kubeadm.go:403] duration metric: took 16.151362097s to StartCluster
	I1229 07:57:06.030655  936640 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:57:06.030736  936640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:57:06.031787  936640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:57:06.032035  936640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:57:06.032168  936640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:57:06.032470  936640 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:57:06.032514  936640 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:57:06.032586  936640 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-664524"
	I1229 07:57:06.032604  936640 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-664524"
	I1229 07:57:06.032633  936640 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:57:06.033214  936640 addons.go:70] Setting default-storageclass=true in profile "embed-certs-664524"
	I1229 07:57:06.033248  936640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-664524"
	I1229 07:57:06.033480  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:06.033612  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:06.035335  936640 out.go:179] * Verifying Kubernetes components...
	I1229 07:57:06.039320  936640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:57:06.085927  936640 addons.go:239] Setting addon default-storageclass=true in "embed-certs-664524"
	I1229 07:57:06.086026  936640 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:57:06.086611  936640 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:57:06.087025  936640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:57:06.089888  936640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:57:06.089911  936640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:57:06.089976  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:06.125817  936640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:57:06.125854  936640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:57:06.125921  936640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:57:06.150115  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:06.161217  936640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:57:06.372560  936640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
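
The pipeline above rewrites the CoreDNS Corefile in the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1) and query logging is enabled, then replaces the ConfigMap in place. One way to confirm the injected hosts block afterwards, assuming a kubeconfig pointing at this cluster (a sketch, not part of the recorded run):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'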
	I1229 07:57:06.395042  936640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:57:06.470966  936640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:57:06.492086  936640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:57:06.990774  936640 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1229 07:57:06.991959  936640 node_ready.go:35] waiting up to 6m0s for node "embed-certs-664524" to be "Ready" ...
	I1229 07:57:07.482885  936640 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:57:07.485890  936640 addons.go:530] duration metric: took 1.453363866s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:57:07.497381  936640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-664524" context rescaled to 1 replicas
	W1229 07:57:08.995707  936640 node_ready.go:57] node "embed-certs-664524" has "Ready":"False" status (will retry)
	W1229 07:57:11.495008  936640 node_ready.go:57] node "embed-certs-664524" has "Ready":"False" status (will retry)
	W1229 07:57:13.495129  936640 node_ready.go:57] node "embed-certs-664524" has "Ready":"False" status (will retry)
	W1229 07:57:15.495981  936640 node_ready.go:57] node "embed-certs-664524" has "Ready":"False" status (will retry)
	W1229 07:57:17.997227  936640 node_ready.go:57] node "embed-certs-664524" has "Ready":"False" status (will retry)
	I1229 07:57:19.496514  936640 node_ready.go:49] node "embed-certs-664524" is "Ready"
	I1229 07:57:19.496556  936640 node_ready.go:38] duration metric: took 12.504553032s for node "embed-certs-664524" to be "Ready" ...
	I1229 07:57:19.496571  936640 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:57:19.496631  936640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:57:19.511439  936640 api_server.go:72] duration metric: took 13.479356701s to wait for apiserver process to appear ...
	I1229 07:57:19.511466  936640 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:57:19.511486  936640 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:57:19.520605  936640 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:57:19.521782  936640 api_server.go:141] control plane version: v1.35.0
	I1229 07:57:19.521821  936640 api_server.go:131] duration metric: took 10.34693ms to wait for apiserver health ...
	I1229 07:57:19.521832  936640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:57:19.524720  936640 system_pods.go:59] 8 kube-system pods found
	I1229 07:57:19.524755  936640 system_pods.go:61] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:57:19.524762  936640 system_pods.go:61] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running
	I1229 07:57:19.524768  936640 system_pods.go:61] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:57:19.524773  936640 system_pods.go:61] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running
	I1229 07:57:19.524778  936640 system_pods.go:61] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running
	I1229 07:57:19.524815  936640 system_pods.go:61] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:57:19.524820  936640 system_pods.go:61] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running
	I1229 07:57:19.524826  936640 system_pods.go:61] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:57:19.524834  936640 system_pods.go:74] duration metric: took 2.99546ms to wait for pod list to return data ...
	I1229 07:57:19.524842  936640 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:57:19.527444  936640 default_sa.go:45] found service account: "default"
	I1229 07:57:19.527468  936640 default_sa.go:55] duration metric: took 2.620458ms for default service account to be created ...
	I1229 07:57:19.527477  936640 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:57:19.530299  936640 system_pods.go:86] 8 kube-system pods found
	I1229 07:57:19.530334  936640 system_pods.go:89] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:57:19.530342  936640 system_pods.go:89] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running
	I1229 07:57:19.530349  936640 system_pods.go:89] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:57:19.530354  936640 system_pods.go:89] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running
	I1229 07:57:19.530360  936640 system_pods.go:89] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running
	I1229 07:57:19.530364  936640 system_pods.go:89] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:57:19.530370  936640 system_pods.go:89] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running
	I1229 07:57:19.530379  936640 system_pods.go:89] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:57:19.530405  936640 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1229 07:57:19.846759  936640 system_pods.go:86] 8 kube-system pods found
	I1229 07:57:19.846795  936640 system_pods.go:89] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:57:19.846803  936640 system_pods.go:89] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running
	I1229 07:57:19.846811  936640 system_pods.go:89] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:57:19.846847  936640 system_pods.go:89] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running
	I1229 07:57:19.846862  936640 system_pods.go:89] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running
	I1229 07:57:19.846867  936640 system_pods.go:89] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:57:19.846871  936640 system_pods.go:89] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running
	I1229 07:57:19.846879  936640 system_pods.go:89] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:57:20.196113  936640 system_pods.go:86] 8 kube-system pods found
	I1229 07:57:20.196152  936640 system_pods.go:89] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:57:20.196160  936640 system_pods.go:89] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running
	I1229 07:57:20.196167  936640 system_pods.go:89] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:57:20.196172  936640 system_pods.go:89] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running
	I1229 07:57:20.196177  936640 system_pods.go:89] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running
	I1229 07:57:20.196182  936640 system_pods.go:89] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:57:20.196190  936640 system_pods.go:89] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running
	I1229 07:57:20.196202  936640 system_pods.go:89] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1229 07:57:20.615490  936640 system_pods.go:86] 8 kube-system pods found
	I1229 07:57:20.615526  936640 system_pods.go:89] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Running
	I1229 07:57:20.615534  936640 system_pods.go:89] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running
	I1229 07:57:20.615539  936640 system_pods.go:89] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:57:20.615544  936640 system_pods.go:89] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running
	I1229 07:57:20.615564  936640 system_pods.go:89] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running
	I1229 07:57:20.615573  936640 system_pods.go:89] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:57:20.615594  936640 system_pods.go:89] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running
	I1229 07:57:20.615599  936640 system_pods.go:89] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Running
	I1229 07:57:20.615607  936640 system_pods.go:126] duration metric: took 1.088123688s to wait for k8s-apps to be running ...
	I1229 07:57:20.615618  936640 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:57:20.615685  936640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:57:20.630685  936640 system_svc.go:56] duration metric: took 15.037392ms WaitForService to wait for kubelet
	I1229 07:57:20.630715  936640 kubeadm.go:587] duration metric: took 14.598637241s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:57:20.630735  936640 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:57:20.633760  936640 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:57:20.633792  936640 node_conditions.go:123] node cpu capacity is 2
	I1229 07:57:20.633806  936640 node_conditions.go:105] duration metric: took 3.065162ms to run NodePressure ...
	I1229 07:57:20.633819  936640 start.go:242] waiting for startup goroutines ...
	I1229 07:57:20.633827  936640 start.go:247] waiting for cluster config update ...
	I1229 07:57:20.633842  936640 start.go:256] writing updated cluster config ...
	I1229 07:57:20.634137  936640 ssh_runner.go:195] Run: rm -f paused
	I1229 07:57:20.637805  936640 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:57:20.642603  936640 pod_ready.go:83] waiting for pod "coredns-7d764666f9-pcpsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:20.647627  936640 pod_ready.go:94] pod "coredns-7d764666f9-pcpsf" is "Ready"
	I1229 07:57:20.647657  936640 pod_ready.go:86] duration metric: took 5.024652ms for pod "coredns-7d764666f9-pcpsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:20.650155  936640 pod_ready.go:83] waiting for pod "etcd-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:20.655167  936640 pod_ready.go:94] pod "etcd-embed-certs-664524" is "Ready"
	I1229 07:57:20.655199  936640 pod_ready.go:86] duration metric: took 5.016659ms for pod "etcd-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:20.658022  936640 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:20.662721  936640 pod_ready.go:94] pod "kube-apiserver-embed-certs-664524" is "Ready"
	I1229 07:57:20.662754  936640 pod_ready.go:86] duration metric: took 4.704469ms for pod "kube-apiserver-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:20.665692  936640 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:21.041961  936640 pod_ready.go:94] pod "kube-controller-manager-embed-certs-664524" is "Ready"
	I1229 07:57:21.042001  936640 pod_ready.go:86] duration metric: took 376.276205ms for pod "kube-controller-manager-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:21.242156  936640 pod_ready.go:83] waiting for pod "kube-proxy-tdfv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:21.642173  936640 pod_ready.go:94] pod "kube-proxy-tdfv8" is "Ready"
	I1229 07:57:21.642203  936640 pod_ready.go:86] duration metric: took 400.019581ms for pod "kube-proxy-tdfv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:21.842594  936640 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:22.242861  936640 pod_ready.go:94] pod "kube-scheduler-embed-certs-664524" is "Ready"
	I1229 07:57:22.242907  936640 pod_ready.go:86] duration metric: took 400.284151ms for pod "kube-scheduler-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:57:22.242928  936640 pod_ready.go:40] duration metric: took 1.605080232s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:57:22.308624  936640 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:57:22.311548  936640 out.go:203] 
	W1229 07:57:22.314375  936640 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:57:22.317171  936640 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:57:22.320996  936640 out.go:179] * Done! kubectl is now configured to use "embed-certs-664524" cluster and "default" namespace by default
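
The start finishes with a warning about kubectl version skew (host kubectl 1.33.2 against cluster 1.35.0). A way to avoid the skew when inspecting the cluster is the bundled kubectl the log suggests, for example (a sketch, not part of the recorded run):

	out/minikube-linux-arm64 -p embed-certs-664524 kubectl -- version
	out/minikube-linux-arm64 -p embed-certs-664524 kubectl -- get nodes -o wide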
	
	
	==> CRI-O <==
	Dec 29 07:57:19 embed-certs-664524 crio[836]: time="2025-12-29T07:57:19.633027369Z" level=info msg="Created container a03edd73ba8bae034f272af587de042e6523c0695828318d6a0f91c02ab75254: kube-system/coredns-7d764666f9-pcpsf/coredns" id=424423ba-5333-4276-a53d-9912325f3f6a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:57:19 embed-certs-664524 crio[836]: time="2025-12-29T07:57:19.634128046Z" level=info msg="Starting container: a03edd73ba8bae034f272af587de042e6523c0695828318d6a0f91c02ab75254" id=cad895e3-440e-4151-80b1-84f119167f24 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:57:19 embed-certs-664524 crio[836]: time="2025-12-29T07:57:19.635840912Z" level=info msg="Started container" PID=1757 containerID=a03edd73ba8bae034f272af587de042e6523c0695828318d6a0f91c02ab75254 description=kube-system/coredns-7d764666f9-pcpsf/coredns id=cad895e3-440e-4151-80b1-84f119167f24 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90f57dada515410b9294cfc9f1859e732134e8142b0c4cee2ea478e93a4ff17f
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.834489815Z" level=info msg="Running pod sandbox: default/busybox/POD" id=03a32220-47b1-4097-b8ef-0e3128632baf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.834564827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.839674509Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346 UID:d756b8ef-3530-493d-bf60-23520e77c33c NetNS:/var/run/netns/3509aa01-32ca-46b5-8617-07c239dd2678 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400064b350}] Aliases:map[]}"
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.839838055Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.854232874Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346 UID:d756b8ef-3530-493d-bf60-23520e77c33c NetNS:/var/run/netns/3509aa01-32ca-46b5-8617-07c239dd2678 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400064b350}] Aliases:map[]}"
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.854382069Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.858507046Z" level=info msg="Ran pod sandbox b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346 with infra container: default/busybox/POD" id=03a32220-47b1-4097-b8ef-0e3128632baf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.859698111Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1d3c33c-c84f-4cf8-bf23-c984feb63648 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.859875073Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b1d3c33c-c84f-4cf8-bf23-c984feb63648 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.859963959Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b1d3c33c-c84f-4cf8-bf23-c984feb63648 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.861090392Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1ea1b12a-c990-4004-8adb-93a970bb3735 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:57:22 embed-certs-664524 crio[836]: time="2025-12-29T07:57:22.861425976Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.77348242Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1ea1b12a-c990-4004-8adb-93a970bb3735 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.774157354Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e38f186-58cd-43aa-a575-2ae48af9fff1 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.77645735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74f7c44d-5bcd-4bb6-b726-077d02411309 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.782469463Z" level=info msg="Creating container: default/busybox/busybox" id=01e013d0-3508-4f57-bb5c-64b310f34075 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.782606309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.787438122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.787905285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.802860141Z" level=info msg="Created container 69fb5f0e4bf06d46c109151dee45418afc477374f66a0c4c6c24382cabff20ff: default/busybox/busybox" id=01e013d0-3508-4f57-bb5c-64b310f34075 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.803851426Z" level=info msg="Starting container: 69fb5f0e4bf06d46c109151dee45418afc477374f66a0c4c6c24382cabff20ff" id=26be7207-666c-4918-9640-b71f5d0aa86d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:57:24 embed-certs-664524 crio[836]: time="2025-12-29T07:57:24.805944061Z" level=info msg="Started container" PID=1817 containerID=69fb5f0e4bf06d46c109151dee45418afc477374f66a0c4c6c24382cabff20ff description=default/busybox/busybox id=26be7207-666c-4918-9640-b71f5d0aa86d name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	69fb5f0e4bf06       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   b9faaf104df2f       busybox                                      default
	a03edd73ba8ba       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   90f57dada5154       coredns-7d764666f9-pcpsf                     kube-system
	5aea96f7790d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   5fc76a1778b24       storage-provisioner                          kube-system
	b4da9ca150305       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   2919e50701a29       kindnet-c8jvc                                kube-system
	780d6ee63f47d       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      25 seconds ago      Running             kube-proxy                0                   6ce4b669e9180       kube-proxy-tdfv8                             kube-system
	cd599c3b8c744       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      36 seconds ago      Running             kube-scheduler            0                   4605310d06c39       kube-scheduler-embed-certs-664524            kube-system
	2873036c20118       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      36 seconds ago      Running             kube-controller-manager   0                   76ed0ee8fd75a       kube-controller-manager-embed-certs-664524   kube-system
	7305a2dd6be07       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      36 seconds ago      Running             etcd                      0                   0af84d55fa0bd       etcd-embed-certs-664524                      kube-system
	20f3805c666ac       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      36 seconds ago      Running             kube-apiserver            0                   9e34efc0b7f23       kube-apiserver-embed-certs-664524            kube-system
	
	
	==> coredns [a03edd73ba8bae034f272af587de042e6523c0695828318d6a0f91c02ab75254] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43373 - 32088 "HINFO IN 4907161846026493171.1378987225609545007. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019801069s
	
	
	==> describe nodes <==
	Name:               embed-certs-664524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-664524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=embed-certs-664524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:56:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-664524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:57:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:57:32 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:57:32 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:57:32 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:57:32 +0000   Mon, 29 Dec 2025 07:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-664524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                aa87aa83-06f4-4a87-8b15-861c15da3cbf
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-pcpsf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-embed-certs-664524                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-c8jvc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-664524             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-664524    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-tdfv8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-664524             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node embed-certs-664524 event: Registered Node embed-certs-664524 in Controller
	
	
	==> dmesg <==
	[Dec29 07:29] overlayfs: idmapped layers are currently not supported
	[Dec29 07:30] overlayfs: idmapped layers are currently not supported
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7305a2dd6be0762de1160379719f6a1ca6d605742c6faf240ed50e06290f94ad] <==
	{"level":"info","ts":"2025-12-29T07:56:55.602051Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:56:56.530137Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:56:56.530187Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:56:56.530233Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-29T07:56:56.530245Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:56:56.530262Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:56:56.531352Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:56:56.531410Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:56:56.531464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:56:56.531474Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:56:56.532652Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:56:56.533771Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-664524 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:56:56.533801Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:56:56.534036Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:56:56.534563Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:56:56.534671Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:56:56.534736Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:56:56.534797Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:56:56.534889Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-29T07:56:56.534983Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:56:56.535544Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:56:56.535652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:56:56.535688Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:56:56.537192Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:56:56.538023Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 07:57:32 up  4:40,  0 user,  load average: 1.40, 1.73, 2.18
	Linux embed-certs-664524 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4da9ca150305ed7fa85cbc6e3d153a4b1b82192419f99dcf9a4ed7fcce3a46c] <==
	I1229 07:57:08.857978       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:57:08.953461       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:57:08.953618       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:57:08.953630       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:57:08.953645       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:57:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:57:09.153903       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:57:09.153935       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:57:09.153944       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:57:09.154246       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:57:09.354788       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:57:09.354814       1 metrics.go:72] Registering metrics
	I1229 07:57:09.354866       1 controller.go:711] "Syncing nftables rules"
	I1229 07:57:19.153828       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:57:19.153877       1 main.go:301] handling current node
	I1229 07:57:29.154319       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:57:29.154352       1 main.go:301] handling current node
	
	
	==> kube-apiserver [20f3805c666ac898a03d9bdeb11ea0bae90400bccd4e0887fa546801df755d7f] <==
	E1229 07:56:58.534496       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1229 07:56:58.540945       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:56:58.556361       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:56:58.558006       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:56:58.574538       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:56:58.576108       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:56:58.699802       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:56:59.145164       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:56:59.152145       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:56:59.152171       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:56:59.851716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:56:59.906452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:57:00.066027       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:57:00.082360       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1229 07:57:00.083855       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:57:00.091831       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:57:00.421041       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:57:01.186013       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:57:01.215714       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:57:01.232946       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:57:06.102466       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:57:06.114917       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:57:06.259239       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:57:06.361342       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1229 07:57:30.674137       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:52752: use of closed network connection
	
	
	==> kube-controller-manager [2873036c20118238d706e525fac04144d9a2907da71ae0df64321389ce903468] <==
	I1229 07:57:05.224524       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.224561       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.224581       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.227294       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.227835       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.228133       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:57:05.228193       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-664524"
	I1229 07:57:05.228237       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:57:05.228283       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.228325       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.228825       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.228882       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:57:05.228913       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:57:05.228923       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:05.228929       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.230897       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.230946       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.239138       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-664524" podCIDRs=["10.244.0.0/24"]
	I1229 07:57:05.242187       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:05.251172       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.322895       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:05.322921       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:57:05.322928       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:57:05.342727       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:20.229933       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [780d6ee63f47d94443e1e9d19f1732dc32c386d1a2867a05dfee0a8262cbb8a7] <==
	I1229 07:57:07.121311       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:57:07.374492       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:07.480494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:07.480528       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:57:07.480596       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:57:07.515605       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:57:07.515747       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:57:07.520405       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:57:07.520707       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:57:07.520719       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:57:07.522599       1 config.go:200] "Starting service config controller"
	I1229 07:57:07.522610       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:57:07.522626       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:57:07.522632       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:57:07.522642       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:57:07.522646       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:57:07.529776       1 config.go:309] "Starting node config controller"
	I1229 07:57:07.529799       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:57:07.529807       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:57:07.622887       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:57:07.622923       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:57:07.622962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cd599c3b8c7444661a6e0a73e3d297210388cd11ab90b1db51a017fd985beeef] <==
	E1229 07:56:58.470027       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:56:58.470384       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:56:58.470929       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:56:58.471037       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:56:58.475515       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:56:58.476267       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:56:58.476310       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:56:58.476346       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:56:58.476381       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:56:58.477257       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:56:58.477301       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:56:58.477376       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:56:58.477674       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:56:59.285081       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:56:59.329323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:56:59.386563       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:56:59.419359       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:56:59.438289       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:56:59.462681       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:56:59.463838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:56:59.486837       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:56:59.540187       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:56:59.542493       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:56:59.829343       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1229 07:57:02.856945       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:57:06 embed-certs-664524 kubelet[1284]: I1229 07:57:06.582252    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/103ef0e3-8d0d-4bbc-a430-92b7c002114c-xtables-lock\") pod \"kindnet-c8jvc\" (UID: \"103ef0e3-8d0d-4bbc-a430-92b7c002114c\") " pod="kube-system/kindnet-c8jvc"
	Dec 29 07:57:06 embed-certs-664524 kubelet[1284]: I1229 07:57:06.582269    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/103ef0e3-8d0d-4bbc-a430-92b7c002114c-lib-modules\") pod \"kindnet-c8jvc\" (UID: \"103ef0e3-8d0d-4bbc-a430-92b7c002114c\") " pod="kube-system/kindnet-c8jvc"
	Dec 29 07:57:06 embed-certs-664524 kubelet[1284]: I1229 07:57:06.582289    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdx9\" (UniqueName: \"kubernetes.io/projected/69aab033-b5bf-44e8-96c0-20f616cd77a2-kube-api-access-nhdx9\") pod \"kube-proxy-tdfv8\" (UID: \"69aab033-b5bf-44e8-96c0-20f616cd77a2\") " pod="kube-system/kube-proxy-tdfv8"
	Dec 29 07:57:06 embed-certs-664524 kubelet[1284]: I1229 07:57:06.582312    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zkcw\" (UniqueName: \"kubernetes.io/projected/103ef0e3-8d0d-4bbc-a430-92b7c002114c-kube-api-access-7zkcw\") pod \"kindnet-c8jvc\" (UID: \"103ef0e3-8d0d-4bbc-a430-92b7c002114c\") " pod="kube-system/kindnet-c8jvc"
	Dec 29 07:57:06 embed-certs-664524 kubelet[1284]: I1229 07:57:06.708141    1284 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 29 07:57:07 embed-certs-664524 kubelet[1284]: I1229 07:57:07.341777    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tdfv8" podStartSLOduration=1.341758596 podStartE2EDuration="1.341758596s" podCreationTimestamp="2025-12-29 07:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:57:07.338620538 +0000 UTC m=+6.325695709" watchObservedRunningTime="2025-12-29 07:57:07.341758596 +0000 UTC m=+6.328833767"
	Dec 29 07:57:12 embed-certs-664524 kubelet[1284]: E1229 07:57:12.985551    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-664524" containerName="kube-scheduler"
	Dec 29 07:57:12 embed-certs-664524 kubelet[1284]: I1229 07:57:12.999884    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-c8jvc" podStartSLOduration=5.1088268039999996 podStartE2EDuration="6.999859608s" podCreationTimestamp="2025-12-29 07:57:06 +0000 UTC" firstStartedPulling="2025-12-29 07:57:06.887345269 +0000 UTC m=+5.874420440" lastFinishedPulling="2025-12-29 07:57:08.778378081 +0000 UTC m=+7.765453244" observedRunningTime="2025-12-29 07:57:09.317735172 +0000 UTC m=+8.304810343" watchObservedRunningTime="2025-12-29 07:57:12.999859608 +0000 UTC m=+11.986934771"
	Dec 29 07:57:13 embed-certs-664524 kubelet[1284]: E1229 07:57:13.310418    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-664524" containerName="kube-scheduler"
	Dec 29 07:57:13 embed-certs-664524 kubelet[1284]: E1229 07:57:13.707646    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-664524" containerName="kube-controller-manager"
	Dec 29 07:57:13 embed-certs-664524 kubelet[1284]: E1229 07:57:13.783522    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-664524" containerName="kube-apiserver"
	Dec 29 07:57:14 embed-certs-664524 kubelet[1284]: E1229 07:57:14.496417    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-664524" containerName="etcd"
	Dec 29 07:57:19 embed-certs-664524 kubelet[1284]: I1229 07:57:19.197225    1284 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:57:19 embed-certs-664524 kubelet[1284]: I1229 07:57:19.292550    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhxcd\" (UniqueName: \"kubernetes.io/projected/2a4096fa-c1ce-4b41-b821-9a1a1e27d301-kube-api-access-mhxcd\") pod \"coredns-7d764666f9-pcpsf\" (UID: \"2a4096fa-c1ce-4b41-b821-9a1a1e27d301\") " pod="kube-system/coredns-7d764666f9-pcpsf"
	Dec 29 07:57:19 embed-certs-664524 kubelet[1284]: I1229 07:57:19.292621    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4096fa-c1ce-4b41-b821-9a1a1e27d301-config-volume\") pod \"coredns-7d764666f9-pcpsf\" (UID: \"2a4096fa-c1ce-4b41-b821-9a1a1e27d301\") " pod="kube-system/coredns-7d764666f9-pcpsf"
	Dec 29 07:57:19 embed-certs-664524 kubelet[1284]: I1229 07:57:19.393816    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txx6\" (UniqueName: \"kubernetes.io/projected/c9bef27e-be02-4cf3-b384-c50517239a0e-kube-api-access-2txx6\") pod \"storage-provisioner\" (UID: \"c9bef27e-be02-4cf3-b384-c50517239a0e\") " pod="kube-system/storage-provisioner"
	Dec 29 07:57:19 embed-certs-664524 kubelet[1284]: I1229 07:57:19.394113    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c9bef27e-be02-4cf3-b384-c50517239a0e-tmp\") pod \"storage-provisioner\" (UID: \"c9bef27e-be02-4cf3-b384-c50517239a0e\") " pod="kube-system/storage-provisioner"
	Dec 29 07:57:19 embed-certs-664524 kubelet[1284]: W1229 07:57:19.566939    1284 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/crio-5fc76a1778b24f2e531b24df6fdf2021af2ecce15efad826010a7f01bea07a0e WatchSource:0}: Error finding container 5fc76a1778b24f2e531b24df6fdf2021af2ecce15efad826010a7f01bea07a0e: Status 404 returned error can't find the container with id 5fc76a1778b24f2e531b24df6fdf2021af2ecce15efad826010a7f01bea07a0e
	Dec 29 07:57:20 embed-certs-664524 kubelet[1284]: E1229 07:57:20.326563    1284 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pcpsf" containerName="coredns"
	Dec 29 07:57:20 embed-certs-664524 kubelet[1284]: I1229 07:57:20.353713    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.353698358 podStartE2EDuration="13.353698358s" podCreationTimestamp="2025-12-29 07:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:57:20.340220411 +0000 UTC m=+19.327295599" watchObservedRunningTime="2025-12-29 07:57:20.353698358 +0000 UTC m=+19.340773529"
	Dec 29 07:57:21 embed-certs-664524 kubelet[1284]: E1229 07:57:21.328001    1284 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pcpsf" containerName="coredns"
	Dec 29 07:57:22 embed-certs-664524 kubelet[1284]: E1229 07:57:22.329737    1284 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pcpsf" containerName="coredns"
	Dec 29 07:57:22 embed-certs-664524 kubelet[1284]: I1229 07:57:22.525216    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pcpsf" podStartSLOduration=16.525198769 podStartE2EDuration="16.525198769s" podCreationTimestamp="2025-12-29 07:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:57:20.354855963 +0000 UTC m=+19.341931134" watchObservedRunningTime="2025-12-29 07:57:22.525198769 +0000 UTC m=+21.512273932"
	Dec 29 07:57:22 embed-certs-664524 kubelet[1284]: I1229 07:57:22.621154    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbtj\" (UniqueName: \"kubernetes.io/projected/d756b8ef-3530-493d-bf60-23520e77c33c-kube-api-access-9vbtj\") pod \"busybox\" (UID: \"d756b8ef-3530-493d-bf60-23520e77c33c\") " pod="default/busybox"
	Dec 29 07:57:22 embed-certs-664524 kubelet[1284]: W1229 07:57:22.856346    1284 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/crio-b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346 WatchSource:0}: Error finding container b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346: Status 404 returned error can't find the container with id b9faaf104df2f02c248eb273b8128b3a76f32282aca8306f65cefcadb550f346
	
	
	==> storage-provisioner [5aea96f7790d9150e46d85e4aa4bdc90c54002870b5911dbf26a74a3a7e8a257] <==
	I1229 07:57:19.632359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:57:19.668498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:57:19.668644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:57:19.673447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:19.695226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:57:19.695489       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:57:19.697047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-664524_3f6c59dd-bc45-4f52-bdfd-3611e19b3b27!
	W1229 07:57:19.698123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:57:19.698607       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3779bf10-bb43-4dfc-bba7-8f9e72c926a6", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-664524_3f6c59dd-bc45-4f52-bdfd-3611e19b3b27 became leader
	W1229 07:57:19.706376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:57:19.797665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-664524_3f6c59dd-bc45-4f52-bdfd-3611e19b3b27!
	W1229 07:57:21.709444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:21.713888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:23.716849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:23.726886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:25.730387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:25.734918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:27.738621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:27.745366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:29.748928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:29.753534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:31.756515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:57:31.764073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-664524 -n embed-certs-664524
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-664524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-664524 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-664524 --alsologtostderr -v=1: exit status 80 (1.874985728s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-664524 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:58:45.522345  945996 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:58:45.522561  945996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:45.522588  945996 out.go:374] Setting ErrFile to fd 2...
	I1229 07:58:45.522615  945996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:45.523049  945996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:58:45.523505  945996 out.go:368] Setting JSON to false
	I1229 07:58:45.523571  945996 mustload.go:66] Loading cluster: embed-certs-664524
	I1229 07:58:45.524391  945996 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:45.526024  945996 cli_runner.go:164] Run: docker container inspect embed-certs-664524 --format={{.State.Status}}
	I1229 07:58:45.544013  945996 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:58:45.544335  945996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:45.610277  945996 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:58:45.597731253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:45.610927  945996 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-664524 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(boo
l=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:58:45.614965  945996 out.go:179] * Pausing node embed-certs-664524 ... 
	I1229 07:58:45.619137  945996 host.go:66] Checking if "embed-certs-664524" exists ...
	I1229 07:58:45.619580  945996 ssh_runner.go:195] Run: systemctl --version
	I1229 07:58:45.619648  945996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524
	I1229 07:58:45.636874  945996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33838 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/embed-certs-664524/id_rsa Username:docker}
	I1229 07:58:45.744699  945996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:58:45.760598  945996 pause.go:52] kubelet running: true
	I1229 07:58:45.760665  945996 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:58:46.011696  945996 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:58:46.011808  945996 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:58:46.091872  945996 cri.go:96] found id: "a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c"
	I1229 07:58:46.091892  945996 cri.go:96] found id: "dc3bc9b218eefd1dd5a0fb53fa0c8e4db9142d53881da74343b285d0d05b6758"
	I1229 07:58:46.091897  945996 cri.go:96] found id: "7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f"
	I1229 07:58:46.091901  945996 cri.go:96] found id: "482c99472f92c834414b043b80388d6641f45c36fa36354051c42a85cd7b4c40"
	I1229 07:58:46.091904  945996 cri.go:96] found id: "b01da39edeea753e83c2d146e6837a58dff702fec6e5bdb574134e1badac68e8"
	I1229 07:58:46.091908  945996 cri.go:96] found id: "654e52c22fa70936846b530c44f221ec218a752ca1cc7140e9422de0ec6628d3"
	I1229 07:58:46.091911  945996 cri.go:96] found id: "d33e8a566872e68896c0f3f3ed489834fbd96480e02e43b56550ca6df569cf2e"
	I1229 07:58:46.091914  945996 cri.go:96] found id: "f4bd112a2b0e2eb9aca6d9b1d6032fd1d77abfb043c5af20060885edb1a04b53"
	I1229 07:58:46.091917  945996 cri.go:96] found id: "7de23a46010383a4bcead63ecc65f9f99043b4ec3bb3d680e0839a2c53de7654"
	I1229 07:58:46.091923  945996 cri.go:96] found id: "7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	I1229 07:58:46.091926  945996 cri.go:96] found id: "924f3c959e2ad4291d69a58bd2fdba2b25cbc103d702c8ba5175b57b4d74da63"
	I1229 07:58:46.091930  945996 cri.go:96] found id: ""
	I1229 07:58:46.091986  945996 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:58:46.116244  945996 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:58:46Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:58:46.355526  945996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:58:46.376597  945996 pause.go:52] kubelet running: false
	I1229 07:58:46.376698  945996 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:58:46.548260  945996 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:58:46.548414  945996 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:58:46.638468  945996 cri.go:96] found id: "a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c"
	I1229 07:58:46.638494  945996 cri.go:96] found id: "dc3bc9b218eefd1dd5a0fb53fa0c8e4db9142d53881da74343b285d0d05b6758"
	I1229 07:58:46.638499  945996 cri.go:96] found id: "7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f"
	I1229 07:58:46.638502  945996 cri.go:96] found id: "482c99472f92c834414b043b80388d6641f45c36fa36354051c42a85cd7b4c40"
	I1229 07:58:46.638505  945996 cri.go:96] found id: "b01da39edeea753e83c2d146e6837a58dff702fec6e5bdb574134e1badac68e8"
	I1229 07:58:46.638509  945996 cri.go:96] found id: "654e52c22fa70936846b530c44f221ec218a752ca1cc7140e9422de0ec6628d3"
	I1229 07:58:46.638512  945996 cri.go:96] found id: "d33e8a566872e68896c0f3f3ed489834fbd96480e02e43b56550ca6df569cf2e"
	I1229 07:58:46.638515  945996 cri.go:96] found id: "f4bd112a2b0e2eb9aca6d9b1d6032fd1d77abfb043c5af20060885edb1a04b53"
	I1229 07:58:46.638518  945996 cri.go:96] found id: "7de23a46010383a4bcead63ecc65f9f99043b4ec3bb3d680e0839a2c53de7654"
	I1229 07:58:46.638526  945996 cri.go:96] found id: "7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	I1229 07:58:46.638530  945996 cri.go:96] found id: "924f3c959e2ad4291d69a58bd2fdba2b25cbc103d702c8ba5175b57b4d74da63"
	I1229 07:58:46.638533  945996 cri.go:96] found id: ""
	I1229 07:58:46.638596  945996 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:58:47.037474  945996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:58:47.051027  945996 pause.go:52] kubelet running: false
	I1229 07:58:47.051149  945996 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:58:47.237876  945996 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:58:47.238024  945996 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:58:47.318400  945996 cri.go:96] found id: "a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c"
	I1229 07:58:47.318470  945996 cri.go:96] found id: "dc3bc9b218eefd1dd5a0fb53fa0c8e4db9142d53881da74343b285d0d05b6758"
	I1229 07:58:47.318491  945996 cri.go:96] found id: "7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f"
	I1229 07:58:47.318513  945996 cri.go:96] found id: "482c99472f92c834414b043b80388d6641f45c36fa36354051c42a85cd7b4c40"
	I1229 07:58:47.318577  945996 cri.go:96] found id: "b01da39edeea753e83c2d146e6837a58dff702fec6e5bdb574134e1badac68e8"
	I1229 07:58:47.318600  945996 cri.go:96] found id: "654e52c22fa70936846b530c44f221ec218a752ca1cc7140e9422de0ec6628d3"
	I1229 07:58:47.318618  945996 cri.go:96] found id: "d33e8a566872e68896c0f3f3ed489834fbd96480e02e43b56550ca6df569cf2e"
	I1229 07:58:47.318637  945996 cri.go:96] found id: "f4bd112a2b0e2eb9aca6d9b1d6032fd1d77abfb043c5af20060885edb1a04b53"
	I1229 07:58:47.318657  945996 cri.go:96] found id: "7de23a46010383a4bcead63ecc65f9f99043b4ec3bb3d680e0839a2c53de7654"
	I1229 07:58:47.318694  945996 cri.go:96] found id: "7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	I1229 07:58:47.318720  945996 cri.go:96] found id: "924f3c959e2ad4291d69a58bd2fdba2b25cbc103d702c8ba5175b57b4d74da63"
	I1229 07:58:47.318741  945996 cri.go:96] found id: ""
	I1229 07:58:47.318830  945996 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:58:47.335488  945996 out.go:203] 
	W1229 07:58:47.338351  945996 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:58:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:58:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:58:47.338449  945996 out.go:285] * 
	* 
	W1229 07:58:47.342771  945996 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:58:47.346914  945996 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-664524 --alsologtostderr -v=1 failed: exit status 80
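The failure reduces to "sudo runc list -f json" exiting with status 1 ("open /run/runc: no such file or directory") even though crictl reports running kube-system containers, so the pause never got as far as pausing anything. A minimal manual check on the node, assuming the embed-certs-664524 profile still exists (these mirror the commands the test ran over SSH):

	out/minikube-linux-arm64 ssh -p embed-certs-664524 -- "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 ssh -p embed-certs-664524 -- "sudo runc list -f json"   # expected to reproduce the /run/runc error
	out/minikube-linux-arm64 ssh -p embed-certs-664524 -- "ls -ld /run/runc"         # shows whether the runc state directory exists at all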
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-664524
helpers_test.go:244: (dbg) docker inspect embed-certs-664524:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1",
	        "Created": "2025-12-29T07:56:41.560040966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 940596,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:57:45.731430628Z",
	            "FinishedAt": "2025-12-29T07:57:44.804058002Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/hosts",
	        "LogPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1-json.log",
	        "Name": "/embed-certs-664524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-664524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-664524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1",
	                "LowerDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-664524",
	                "Source": "/var/lib/docker/volumes/embed-certs-664524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-664524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-664524",
	                "name.minikube.sigs.k8s.io": "embed-certs-664524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e60b93a129bb746ce17b88e73bb23ac02439bb2f3ba87312a0ca82675f108f70",
	            "SandboxKey": "/var/run/docker/netns/e60b93a129bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-664524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:b9:68:b3:75:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2572828bf01c704dfae705c3731b5490ef1a8c7071b9170453d61e871f33f10",
	                    "EndpointID": "d9178428ffb2c37ad5560ab713f826cdd19868eb552dc23c9916036388fbe329",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-664524",
	                        "ece150c04980"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
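The inspect output shows the container still running, and its 22/tcp mapping (127.0.0.1:33838) is the same SSH endpoint the pause command connected to, so the error above came from inside the guest rather than from the Docker or network layer. The port can be read back with the same template the test itself uses:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-664524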
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524: exit status 2 (461.618657ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
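Exit status 2 with "Running" on stdout is consistent with the host being up while the Kubernetes components are not: the failed pause had already run "sudo systemctl disable --now kubelet" before the runc listing error, leaving the node half-paused. A plausible way to put it back for further debugging, assuming the profile is kept around (not something this run verifies):

	out/minikube-linux-arm64 ssh -p embed-certs-664524 -- "sudo systemctl enable --now kubelet"
	out/minikube-linux-arm64 unpause -p embed-certs-664524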
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-664524 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-664524 logs -n 25: (1.486864342s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:58 UTC │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p force-systemd-flag-122604                                                                                                                                                                                                                  │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p disable-driver-mounts-062900                                                                                                                                                                                                               │ disable-driver-mounts-062900 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ image   │ embed-certs-664524 image list --format=json                                                                                                                                                                                                   │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:58:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:58:00.431236  943004 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:58:00.431490  943004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:00.431522  943004 out.go:374] Setting ErrFile to fd 2...
	I1229 07:58:00.431544  943004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:00.431901  943004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:58:00.432501  943004 out.go:368] Setting JSON to false
	I1229 07:58:00.437873  943004 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16832,"bootTime":1766978248,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:58:00.438010  943004 start.go:143] virtualization:  
	I1229 07:58:00.272372  940463 addons.go:530] duration metric: took 6.892514577s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1229 07:58:00.289826  940463 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:58:00.293126  940463 api_server.go:141] control plane version: v1.35.0
	I1229 07:58:00.293159  940463 api_server.go:131] duration metric: took 34.57797ms to wait for apiserver health ...
	I1229 07:58:00.293169  940463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:58:00.304329  940463 system_pods.go:59] 8 kube-system pods found
	I1229 07:58:00.304380  940463 system_pods.go:61] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:58:00.304393  940463 system_pods.go:61] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:58:00.304402  940463 system_pods.go:61] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:58:00.304410  940463 system_pods.go:61] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:58:00.304420  940463 system_pods.go:61] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:58:00.304425  940463 system_pods.go:61] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:58:00.304433  940463 system_pods.go:61] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:58:00.304438  940463 system_pods.go:61] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Running
	I1229 07:58:00.304445  940463 system_pods.go:74] duration metric: took 11.268005ms to wait for pod list to return data ...
	I1229 07:58:00.304454  940463 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:58:00.313978  940463 default_sa.go:45] found service account: "default"
	I1229 07:58:00.314008  940463 default_sa.go:55] duration metric: took 9.547066ms for default service account to be created ...
	I1229 07:58:00.314019  940463 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:58:00.331486  940463 system_pods.go:86] 8 kube-system pods found
	I1229 07:58:00.331527  940463 system_pods.go:89] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:58:00.331541  940463 system_pods.go:89] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:58:00.331548  940463 system_pods.go:89] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:58:00.331556  940463 system_pods.go:89] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:58:00.331563  940463 system_pods.go:89] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:58:00.331569  940463 system_pods.go:89] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:58:00.331576  940463 system_pods.go:89] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:58:00.331582  940463 system_pods.go:89] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Running
	I1229 07:58:00.331591  940463 system_pods.go:126] duration metric: took 17.564708ms to wait for k8s-apps to be running ...
	I1229 07:58:00.331600  940463 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:58:00.331665  940463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:58:00.361340  940463 system_svc.go:56] duration metric: took 29.729187ms WaitForService to wait for kubelet
	I1229 07:58:00.361375  940463 kubeadm.go:587] duration metric: took 6.984909163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:58:00.361399  940463 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:58:00.365714  940463 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:58:00.365744  940463 node_conditions.go:123] node cpu capacity is 2
	I1229 07:58:00.365760  940463 node_conditions.go:105] duration metric: took 4.355413ms to run NodePressure ...
	I1229 07:58:00.365775  940463 start.go:242] waiting for startup goroutines ...
	I1229 07:58:00.365782  940463 start.go:247] waiting for cluster config update ...
	I1229 07:58:00.365794  940463 start.go:256] writing updated cluster config ...
	I1229 07:58:00.366098  940463 ssh_runner.go:195] Run: rm -f paused
	I1229 07:58:00.371665  940463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:58:00.380846  940463 pod_ready.go:83] waiting for pod "coredns-7d764666f9-pcpsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:00.444902  943004 out.go:179] * [default-k8s-diff-port-358521] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:58:00.448159  943004 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:58:00.448244  943004 notify.go:221] Checking for updates...
	I1229 07:58:00.454342  943004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:58:00.457488  943004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:58:00.460638  943004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:58:00.463802  943004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:58:00.466952  943004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:58:00.470656  943004 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:00.470849  943004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:58:00.520443  943004 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:58:00.520565  943004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:00.587828  943004 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:58:00.575784205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:00.587965  943004 docker.go:319] overlay module found
	I1229 07:58:00.591193  943004 out.go:179] * Using the docker driver based on user configuration
	I1229 07:58:00.594209  943004 start.go:309] selected driver: docker
	I1229 07:58:00.594268  943004 start.go:928] validating driver "docker" against <nil>
	I1229 07:58:00.594285  943004 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:58:00.595161  943004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:00.703379  943004 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:58:00.692611373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:00.703676  943004 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:58:00.703901  943004 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:58:00.706980  943004 out.go:179] * Using Docker driver with root privileges
	I1229 07:58:00.712172  943004 cni.go:84] Creating CNI manager for ""
	I1229 07:58:00.712247  943004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:00.712264  943004 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:58:00.712431  943004 start.go:353] cluster config:
	{Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:58:00.715708  943004 out.go:179] * Starting "default-k8s-diff-port-358521" primary control-plane node in "default-k8s-diff-port-358521" cluster
	I1229 07:58:00.718567  943004 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:58:00.721415  943004 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:58:00.724326  943004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:00.724379  943004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:58:00.724407  943004 cache.go:65] Caching tarball of preloaded images
	I1229 07:58:00.724419  943004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:58:00.724505  943004 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:58:00.724516  943004 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:58:00.724644  943004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json ...
	I1229 07:58:00.724679  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json: {Name:mk16cba71448c6c5e019904381a1c110a56ee972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:00.745476  943004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:58:00.745503  943004 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:58:00.745525  943004 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:58:00.745557  943004 start.go:360] acquireMachinesLock for default-k8s-diff-port-358521: {Name:mk006058b00af6cad673fa6451223fba67b65954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:58:00.745673  943004 start.go:364] duration metric: took 93.35µs to acquireMachinesLock for "default-k8s-diff-port-358521"
	I1229 07:58:00.745704  943004 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:58:00.745775  943004 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:58:00.749382  943004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:58:00.749632  943004 start.go:159] libmachine.API.Create for "default-k8s-diff-port-358521" (driver="docker")
	I1229 07:58:00.749674  943004 client.go:173] LocalClient.Create starting
	I1229 07:58:00.749752  943004 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:58:00.749795  943004 main.go:144] libmachine: Decoding PEM data...
	I1229 07:58:00.749815  943004 main.go:144] libmachine: Parsing certificate...
	I1229 07:58:00.749875  943004 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:58:00.749898  943004 main.go:144] libmachine: Decoding PEM data...
	I1229 07:58:00.749911  943004 main.go:144] libmachine: Parsing certificate...
	I1229 07:58:00.750296  943004 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:58:00.774890  943004 cli_runner.go:211] docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:58:00.774971  943004 network_create.go:284] running [docker network inspect default-k8s-diff-port-358521] to gather additional debugging logs...
	I1229 07:58:00.774991  943004 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521
	W1229 07:58:00.796866  943004 cli_runner.go:211] docker network inspect default-k8s-diff-port-358521 returned with exit code 1
	I1229 07:58:00.796899  943004 network_create.go:287] error running [docker network inspect default-k8s-diff-port-358521]: docker network inspect default-k8s-diff-port-358521: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-358521 not found
	I1229 07:58:00.796913  943004 network_create.go:289] output of [docker network inspect default-k8s-diff-port-358521]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-358521 not found
	
	** /stderr **
	I1229 07:58:00.797030  943004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:58:00.816910  943004 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:58:00.817322  943004 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:58:00.817647  943004 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:58:00.818077  943004 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a054f0}
	I1229 07:58:00.818098  943004 network_create.go:124] attempt to create docker network default-k8s-diff-port-358521 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:58:00.818171  943004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 default-k8s-diff-port-358521
	I1229 07:58:00.887552  943004 network_create.go:108] docker network default-k8s-diff-port-358521 192.168.76.0/24 created
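The subnet scan above walks candidate 192.168.x.0/24 ranges (49, 58, 67, ...) and takes the first one no local bridge already claims before running docker network create. A minimal standalone sketch of that idea in Go, assuming the step-of-9 walk and the interface-address check are a fair simplification of minikube's actual network selection:

    // pick_subnet.go: illustrative sketch only, not minikube's network.go.
    // Walks 192.168.x.0/24 candidates in steps of 9 and takes the first one
    // that no local interface address already falls into.
    package main

    import (
        "fmt"
        "net"
    )

    func subnetTaken(cidr string) bool {
        _, want, err := net.ParseCIDR(cidr)
        if err != nil {
            return true
        }
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative: treat errors as "taken"
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && want.Contains(ipnet.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet < 256; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if subnetTaken(cidr) {
                fmt.Println("skipping subnet that is taken:", cidr)
                continue
            }
            fmt.Println("using free private subnet:", cidr)
            // "docker network create --driver=bridge --subnet=<cidr> ..." would follow here.
            break
        }
    }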
	I1229 07:58:00.887589  943004 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-358521" container
	I1229 07:58:00.887682  943004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:58:00.905823  943004 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-358521 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:58:00.934506  943004 oci.go:103] Successfully created a docker volume default-k8s-diff-port-358521
	I1229 07:58:00.934595  943004 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-358521-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --entrypoint /usr/bin/test -v default-k8s-diff-port-358521:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:58:01.550644  943004 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-358521
	I1229 07:58:01.550714  943004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:01.550729  943004 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:58:01.550792  943004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-358521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	W1229 07:58:02.387676  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:04.388571  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:06.491301  943004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-358521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (4.940465298s)
	I1229 07:58:06.491329  943004 kic.go:203] duration metric: took 4.940597254s to extract preloaded images to volume ...
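That ~4.9s step untars the preloaded image cache straight into the named Docker volume via a throwaway container. A rough Go equivalent of the logged invocation, with the tarball path and volume name copied from the log, the image digest dropped for brevity, and error handling reduced to a fatal log:

    // extract_preload.go: replays the "docker run --rm --entrypoint /usr/bin/tar ..."
    // command from the log; assumes a local Docker daemon and the cached tarball.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4"
        volume := "default-k8s-diff-port-358521"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353"

        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("preload extraction failed: %v\n%s", err, out)
        }
        log.Println("preloaded images extracted into volume", volume)
    }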
	W1229 07:58:06.491474  943004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:58:06.491568  943004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:58:06.584210  943004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-358521 --name default-k8s-diff-port-358521 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --network default-k8s-diff-port-358521 --ip 192.168.76.2 --volume default-k8s-diff-port-358521:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:58:07.077008  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Running}}
	I1229 07:58:07.099204  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:07.118032  943004 cli_runner.go:164] Run: docker exec default-k8s-diff-port-358521 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:58:07.193498  943004 oci.go:144] the created container "default-k8s-diff-port-358521" has a running status.
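The inspect calls right after "docker run -d" are effectively a readiness poll: keep asking for {{.State.Running}} until the kic container reports true. A tiny poll-loop sketch of that pattern; the timeout and sleep interval are assumptions, not values taken from minikube:

    // wait_running.go: illustrative poll on "docker container inspect --format {{.State.Running}}".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func isRunning(name string) bool {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Running}}").Output()
        return err == nil && strings.TrimSpace(string(out)) == "true"
    }

    func main() {
        const name = "default-k8s-diff-port-358521"
        deadline := time.Now().Add(30 * time.Second) // assumed timeout
        for time.Now().Before(deadline) {
            if isRunning(name) {
                fmt.Println("container", name, "is running")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for", name)
    }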
	I1229 07:58:07.193535  943004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa...
	I1229 07:58:07.969565  943004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:58:07.990740  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:08.021883  943004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:58:08.021916  943004 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-358521 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:58:08.094410  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:08.115028  943004 machine.go:94] provisionDockerMachine start ...
	I1229 07:58:08.115210  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:08.138271  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:08.138615  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:08.138624  943004 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:58:08.140763  943004 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1229 07:58:06.901277  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:09.387888  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:11.308918  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-358521
	
	I1229 07:58:11.308991  943004 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-358521"
	I1229 07:58:11.309094  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:11.328937  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:11.329286  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:11.329303  943004 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-358521 && echo "default-k8s-diff-port-358521" | sudo tee /etc/hostname
	I1229 07:58:11.499633  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-358521
	
	I1229 07:58:11.499815  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:11.525765  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:11.526076  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:11.526105  943004 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-358521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-358521/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-358521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:58:11.697847  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: 
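Every provisioning command above travels over SSH to 127.0.0.1:33843, the host port Docker published for the container's 22/tcp, authenticating as user docker with the id_rsa key from the machines directory shown in the log. A bare-bones client sketch using golang.org/x/crypto/ssh; disabling host-key verification is an assumption here, justified only because the kic container is local and throwaway:

    // ssh_run.go: minimal stand-in for minikube's ssh_runner, using the port and
    // key path that appear in the log above.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := "/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local, ephemeral container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33843", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }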
	I1229 07:58:11.697884  943004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:58:11.697933  943004 ubuntu.go:190] setting up certificates
	I1229 07:58:11.697948  943004 provision.go:84] configureAuth start
	I1229 07:58:11.698017  943004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:58:11.732076  943004 provision.go:143] copyHostCerts
	I1229 07:58:11.732480  943004 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:58:11.732500  943004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:58:11.732679  943004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:58:11.732919  943004 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:58:11.732930  943004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:58:11.732982  943004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:58:11.733068  943004 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:58:11.733075  943004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:58:11.733110  943004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:58:11.733173  943004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-358521 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-358521 localhost minikube]
	I1229 07:58:11.860396  943004 provision.go:177] copyRemoteCerts
	I1229 07:58:11.860481  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:58:11.860524  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:11.890136  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.007620  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1229 07:58:12.038619  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:58:12.064545  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:58:12.090417  943004 provision.go:87] duration metric: took 392.44628ms to configureAuth
	I1229 07:58:12.090460  943004 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:58:12.090670  943004 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:12.090800  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.114793  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:12.115111  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:12.115131  943004 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:58:12.452884  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:58:12.452911  943004 machine.go:97] duration metric: took 4.33786402s to provisionDockerMachine
	I1229 07:58:12.452928  943004 client.go:176] duration metric: took 11.703236565s to LocalClient.Create
	I1229 07:58:12.452943  943004 start.go:167] duration metric: took 11.703313473s to libmachine.API.Create "default-k8s-diff-port-358521"
	I1229 07:58:12.452953  943004 start.go:293] postStartSetup for "default-k8s-diff-port-358521" (driver="docker")
	I1229 07:58:12.452967  943004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:58:12.453041  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:58:12.453102  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.478030  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.594091  943004 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:58:12.598093  943004 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:58:12.598132  943004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:58:12.598143  943004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:58:12.598204  943004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:58:12.598296  943004 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:58:12.598408  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:58:12.606086  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:58:12.623611  943004 start.go:296] duration metric: took 170.642286ms for postStartSetup
	I1229 07:58:12.623976  943004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:58:12.643929  943004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json ...
	I1229 07:58:12.644239  943004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:58:12.644330  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.668337  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.779725  943004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:58:12.785945  943004 start.go:128] duration metric: took 12.040148928s to createHost
	I1229 07:58:12.785972  943004 start.go:83] releasing machines lock for "default-k8s-diff-port-358521", held for 12.040283657s
	I1229 07:58:12.786053  943004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:58:12.821176  943004 ssh_runner.go:195] Run: cat /version.json
	I1229 07:58:12.821230  943004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:58:12.821238  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.821380  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.867102  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.882426  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:13.104207  943004 ssh_runner.go:195] Run: systemctl --version
	I1229 07:58:13.117643  943004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:58:13.185682  943004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:58:13.190571  943004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:58:13.190669  943004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:58:13.225196  943004 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:58:13.225239  943004 start.go:496] detecting cgroup driver to use...
	I1229 07:58:13.225271  943004 detect.go:187] detected "cgroupfs" cgroup driver on host os
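One signal behind the "cgroupfs" verdict above is whether the host exposes the unified cgroup v2 hierarchy at all; the probe below is only that single check, a heavily simplified aside rather than minikube's detect.go logic:

    // cgroup_probe.go: on cgroup v2 (unified hierarchy) cgroup.controllers exists; on v1 it does not.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 (legacy hierarchy)")
        }
    }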
	I1229 07:58:13.225324  943004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:58:13.249725  943004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:58:13.270584  943004 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:58:13.270660  943004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:58:13.296068  943004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:58:13.322076  943004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:58:13.524060  943004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:58:13.700001  943004 docker.go:234] disabling docker service ...
	I1229 07:58:13.700079  943004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:58:13.732720  943004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:58:13.747648  943004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:58:13.904095  943004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:58:14.071722  943004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:58:14.088344  943004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:58:14.108301  943004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:58:14.108422  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.117994  943004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:58:14.118121  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.130583  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.143583  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.157424  943004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:58:14.177211  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.186698  943004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.209360  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.219410  943004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:58:14.228338  943004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:58:14.237960  943004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:58:14.395556  943004 ssh_runner.go:195] Run: sudo systemctl restart crio
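Those sed edits rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, force conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before a daemon-reload and crio restart. A condensed local replay of a subset of those commands (the sysctl steps are omitted); minikube actually runs them through its ssh_runner inside the container:

    // crio_config.go: replays some of the logged CRI-O config edits locally; requires sudo.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        steps := [][]string{
            // same substitutions as in the log, applied with "sh -c 'sudo sed -i ...'"
            {"sh", "-c", `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf},
            {"sh", "-c", `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf},
            {"sh", "-c", `sudo sed -i '/conmon_cgroup = .*/d' ` + conf},
            {"sh", "-c", `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf},
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "restart", "crio"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", s, err, out)
            }
        }
    }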
	I1229 07:58:14.844715  943004 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:58:14.844884  943004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:58:14.851930  943004 start.go:574] Will wait 60s for crictl version
	I1229 07:58:14.852051  943004 ssh_runner.go:195] Run: which crictl
	I1229 07:58:14.863775  943004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:58:14.904608  943004 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:58:14.904763  943004 ssh_runner.go:195] Run: crio --version
	I1229 07:58:14.933035  943004 ssh_runner.go:195] Run: crio --version
	I1229 07:58:14.966895  943004 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:58:14.969878  943004 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:58:14.986081  943004 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:58:14.989978  943004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:58:14.999784  943004 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:58:14.999915  943004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:14.999976  943004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:58:15.048428  943004 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:58:15.048455  943004 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:58:15.048514  943004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:58:15.076753  943004 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:58:15.076777  943004 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:58:15.076819  943004 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1229 07:58:15.076921  943004 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-358521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:58:15.077005  943004 ssh_runner.go:195] Run: crio config
	I1229 07:58:15.144559  943004 cni.go:84] Creating CNI manager for ""
	I1229 07:58:15.144593  943004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:15.144610  943004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:58:15.144635  943004 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-358521 NodeName:default-k8s-diff-port-358521 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:58:15.144826  943004 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-358521"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:58:15.144912  943004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:58:15.153017  943004 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:58:15.153096  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:58:15.161106  943004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1229 07:58:15.175332  943004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:58:15.188449  943004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1229 07:58:15.201730  943004 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:58:15.205439  943004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:58:15.215071  943004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:58:15.335456  943004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:58:15.354079  943004 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521 for IP: 192.168.76.2
	I1229 07:58:15.354146  943004 certs.go:195] generating shared ca certs ...
	I1229 07:58:15.354177  943004 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.354351  943004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:58:15.354433  943004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:58:15.354459  943004 certs.go:257] generating profile certs ...
	I1229 07:58:15.354533  943004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key
	I1229 07:58:15.354569  943004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt with IP's: []
	W1229 07:58:11.897225  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:14.386904  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:15.587271  943004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt ...
	I1229 07:58:15.587307  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: {Name:mk00149d860648af75f7db82a1f663a0e5cf0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.587520  943004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key ...
	I1229 07:58:15.587537  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key: {Name:mk2b1a3440df07047bf5a540398a51c93c591f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.587634  943004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a
	I1229 07:58:15.587652  943004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:58:15.826613  943004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a ...
	I1229 07:58:15.826646  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a: {Name:mkc4c9e2f2b281bef7b33b743b8595fadd4b3579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.828224  943004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a ...
	I1229 07:58:15.828301  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a: {Name:mkce1814bd12f69c9167052b091b402d53c4dfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.828558  943004 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt
	I1229 07:58:15.828728  943004 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key
	I1229 07:58:15.828891  943004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key
	I1229 07:58:15.828950  943004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt with IP's: []
	I1229 07:58:15.939481  943004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt ...
	I1229 07:58:15.939514  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt: {Name:mkc3c4f5cf1af7486aea1e8d1c97a70f236aa904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.939707  943004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key ...
	I1229 07:58:15.939722  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key: {Name:mkdb4eb6351e76c37d03a5beb250f62df8a8e7ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
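The profile certs generated above are ordinary CA-signed X.509 pairs: the cached minikubeCA key signs an apiserver certificate whose IP SANs include the in-cluster service IP 10.96.0.1 and the node IP 192.168.76.2, alongside a client cert and the aggregator (proxy-client) pair. A compact crypto/x509 sketch of that shape, with throwaway keys, a short lifetime, and errors ignored for brevity; this is not minikube's certs.go:

    // profile_certs.go: sketch of a CA-signed leaf cert with the IP SANs seen in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key + self-signed CA cert (stand-in for minikubeCA); errors ignored for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the IP SANs from the log: 10.96.0.1 is the in-cluster
        // apiserver service IP, 192.168.76.2 the node IP.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }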
	I1229 07:58:15.939932  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:58:15.939983  943004 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:58:15.940006  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:58:15.940035  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:58:15.940062  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:58:15.940088  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:58:15.940141  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:58:15.940720  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:58:15.963286  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:58:15.985567  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:58:16.007580  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:58:16.029142  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1229 07:58:16.048015  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:58:16.067950  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:58:16.086974  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:58:16.111058  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:58:16.128678  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:58:16.146871  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:58:16.165808  943004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:58:16.179408  943004 ssh_runner.go:195] Run: openssl version
	I1229 07:58:16.185694  943004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.193807  943004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:58:16.202587  943004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.207329  943004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.207443  943004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.251291  943004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:58:16.259269  943004 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:58:16.266857  943004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.275562  943004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:58:16.283276  943004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.287281  943004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.287363  943004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.329530  943004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:58:16.337532  943004 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:58:16.345178  943004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.353009  943004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:58:16.360946  943004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.364954  943004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.365072  943004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.407945  943004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:58:16.415798  943004 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
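The openssl/ln pairs above build OpenSSL's hashed CA directory: each PEM in /etc/ssl/certs gets a companion symlink named <subject-hash>.0 (for example 3ec20f2e.0 pointing at 7418542.pem) so the library can locate issuers by hash lookup. A small sketch of those two commands for one certificate, run locally here whereas the log issues them over SSH:

    // hash_link.go: compute the OpenSSL subject hash for a PEM and create the <hash>.0 symlink.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/etc/ssl/certs/7418542.pem"

        // "openssl x509 -hash -noout" prints the subject hash OpenSSL uses for lookups.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

        // Same "ln -fs" the log issues (needs root, hence sudo).
        if err := exec.Command("sudo", "ln", "-fs", pemPath, link).Run(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }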
	I1229 07:58:16.423407  943004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:58:16.427007  943004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:58:16.427061  943004 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:58:16.427155  943004 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:58:16.427218  943004 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:58:16.455848  943004 cri.go:96] found id: ""
	I1229 07:58:16.455922  943004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:58:16.463571  943004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:58:16.471468  943004 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:58:16.471534  943004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:58:16.479988  943004 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:58:16.480060  943004 kubeadm.go:158] found existing configuration files:
	
	I1229 07:58:16.480144  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1229 07:58:16.488144  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:58:16.488220  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:58:16.495483  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1229 07:58:16.503026  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:58:16.503096  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:58:16.510687  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1229 07:58:16.518957  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:58:16.519073  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:58:16.527033  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1229 07:58:16.534806  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:58:16.534921  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:58:16.542420  943004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:58:16.579276  943004 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:58:16.579441  943004 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:58:16.656618  943004 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:58:16.656707  943004 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:58:16.656756  943004 kubeadm.go:319] OS: Linux
	I1229 07:58:16.656876  943004 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:58:16.656978  943004 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:58:16.657064  943004 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:58:16.657148  943004 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:58:16.657255  943004 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:58:16.657326  943004 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:58:16.657400  943004 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:58:16.657469  943004 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:58:16.657539  943004 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:58:16.733879  943004 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:58:16.734054  943004 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:58:16.734180  943004 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:58:16.745194  943004 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:58:16.751508  943004 out.go:252]   - Generating certificates and keys ...
	I1229 07:58:16.751686  943004 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:58:16.751827  943004 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:58:17.023503  943004 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:58:17.710826  943004 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:58:18.569063  943004 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:58:18.683921  943004 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:58:18.798531  943004 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:58:18.798935  943004 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-358521 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:58:18.991076  943004 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:58:18.991381  943004 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-358521 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:58:19.441547  943004 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:58:19.574253  943004 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:58:19.815013  943004 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:58:19.815332  943004 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:58:20.186388  943004 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	W1229 07:58:16.387867  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:18.390453  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:20.478974  943004 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:58:20.666938  943004 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:58:20.886606  943004 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:58:21.205559  943004 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:58:21.206321  943004 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:58:21.210696  943004 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:58:21.214094  943004 out.go:252]   - Booting up control plane ...
	I1229 07:58:21.214200  943004 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:58:21.214284  943004 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:58:21.214355  943004 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:58:21.231138  943004 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:58:21.231432  943004 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:58:21.240121  943004 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:58:21.240466  943004 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:58:21.240653  943004 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:58:21.375181  943004 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:58:21.375306  943004 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:58:22.377017  943004 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001932806s
	I1229 07:58:22.381484  943004 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:58:22.383193  943004 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1229 07:58:22.383302  943004 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:58:22.383386  943004 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:58:23.892762  943004 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.509987181s
	W1229 07:58:20.888722  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:22.889215  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:25.387364  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:25.833617  943004 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.451321587s
	I1229 07:58:27.887513  943004 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.504065067s
	I1229 07:58:27.922449  943004 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:58:27.939001  943004 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:58:27.954876  943004 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:58:27.955113  943004 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-358521 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:58:27.968710  943004 kubeadm.go:319] [bootstrap-token] Using token: 9qe0ci.3tjt6vxlx8dvskrk
	I1229 07:58:27.971888  943004 out.go:252]   - Configuring RBAC rules ...
	I1229 07:58:27.972012  943004 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:58:27.976095  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:58:27.984741  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:58:27.991113  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:58:27.995125  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:58:27.998970  943004 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:58:28.297496  943004 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:58:28.721120  943004 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:58:29.301866  943004 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:58:29.303114  943004 kubeadm.go:319] 
	I1229 07:58:29.303193  943004 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:58:29.303201  943004 kubeadm.go:319] 
	I1229 07:58:29.303278  943004 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:58:29.303285  943004 kubeadm.go:319] 
	I1229 07:58:29.303310  943004 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:58:29.303374  943004 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:58:29.303428  943004 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:58:29.303436  943004 kubeadm.go:319] 
	I1229 07:58:29.303496  943004 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:58:29.303505  943004 kubeadm.go:319] 
	I1229 07:58:29.303553  943004 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:58:29.303561  943004 kubeadm.go:319] 
	I1229 07:58:29.303612  943004 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:58:29.303691  943004 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:58:29.303764  943004 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:58:29.303772  943004 kubeadm.go:319] 
	I1229 07:58:29.303858  943004 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:58:29.303938  943004 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:58:29.303946  943004 kubeadm.go:319] 
	I1229 07:58:29.304030  943004 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 9qe0ci.3tjt6vxlx8dvskrk \
	I1229 07:58:29.304137  943004 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:58:29.304160  943004 kubeadm.go:319] 	--control-plane 
	I1229 07:58:29.304175  943004 kubeadm.go:319] 
	I1229 07:58:29.304259  943004 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:58:29.304267  943004 kubeadm.go:319] 
	I1229 07:58:29.304350  943004 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 9qe0ci.3tjt6vxlx8dvskrk \
	I1229 07:58:29.304456  943004 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
	I1229 07:58:29.308526  943004 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:58:29.308961  943004 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:58:29.309075  943004 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:58:29.309095  943004 cni.go:84] Creating CNI manager for ""
	I1229 07:58:29.309104  943004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:29.312262  943004 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:58:29.315167  943004 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:58:29.319557  943004 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:58:29.319575  943004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:58:29.340518  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:58:29.726708  943004 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:58:29.726845  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:29.726910  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-358521 minikube.k8s.io/updated_at=2025_12_29T07_58_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=default-k8s-diff-port-358521 minikube.k8s.io/primary=true
	I1229 07:58:30.085881  943004 ops.go:34] apiserver oom_adj: -16
	I1229 07:58:30.086004  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1229 07:58:27.887040  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:29.891974  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:31.886582  940463 pod_ready.go:94] pod "coredns-7d764666f9-pcpsf" is "Ready"
	I1229 07:58:31.886610  940463 pod_ready.go:86] duration metric: took 31.505734496s for pod "coredns-7d764666f9-pcpsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.889695  940463 pod_ready.go:83] waiting for pod "etcd-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.894744  940463 pod_ready.go:94] pod "etcd-embed-certs-664524" is "Ready"
	I1229 07:58:31.894771  940463 pod_ready.go:86] duration metric: took 5.047636ms for pod "etcd-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.897365  940463 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.902672  940463 pod_ready.go:94] pod "kube-apiserver-embed-certs-664524" is "Ready"
	I1229 07:58:31.902702  940463 pod_ready.go:86] duration metric: took 5.313837ms for pod "kube-apiserver-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.905185  940463 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.084839  940463 pod_ready.go:94] pod "kube-controller-manager-embed-certs-664524" is "Ready"
	I1229 07:58:32.084869  940463 pod_ready.go:86] duration metric: took 179.65786ms for pod "kube-controller-manager-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.285272  940463 pod_ready.go:83] waiting for pod "kube-proxy-tdfv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.684541  940463 pod_ready.go:94] pod "kube-proxy-tdfv8" is "Ready"
	I1229 07:58:32.684573  940463 pod_ready.go:86] duration metric: took 399.271615ms for pod "kube-proxy-tdfv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.884898  940463 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:33.284829  940463 pod_ready.go:94] pod "kube-scheduler-embed-certs-664524" is "Ready"
	I1229 07:58:33.284854  940463 pod_ready.go:86] duration metric: took 399.931428ms for pod "kube-scheduler-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:33.284865  940463 pod_ready.go:40] duration metric: took 32.913153665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:58:33.375036  940463 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:58:33.378202  940463 out.go:203] 
	W1229 07:58:33.381074  940463 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:58:33.384150  940463 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:58:33.387149  940463 out.go:179] * Done! kubectl is now configured to use "embed-certs-664524" cluster and "default" namespace by default
	I1229 07:58:30.586126  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:31.086398  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:31.586704  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:32.086008  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:32.586534  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:33.086180  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:33.586771  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:33.730886  943004 kubeadm.go:1114] duration metric: took 4.004092687s to wait for elevateKubeSystemPrivileges
	I1229 07:58:33.730915  943004 kubeadm.go:403] duration metric: took 17.303860187s to StartCluster
	I1229 07:58:33.730933  943004 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:33.730990  943004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:58:33.732462  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:33.732708  943004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:58:33.732995  943004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:58:33.733273  943004 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:58:33.733357  943004 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-358521"
	I1229 07:58:33.733372  943004 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-358521"
	I1229 07:58:33.733397  943004 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:58:33.733440  943004 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:33.733480  943004 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-358521"
	I1229 07:58:33.733496  943004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-358521"
	I1229 07:58:33.734013  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:33.734490  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:33.741149  943004 out.go:179] * Verifying Kubernetes components...
	I1229 07:58:33.747420  943004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:58:33.787006  943004 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-358521"
	I1229 07:58:33.787051  943004 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:58:33.787629  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:33.837845  943004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:58:33.840882  943004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:58:33.840912  943004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:58:33.840982  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:33.926385  943004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:58:33.926407  943004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:58:33.926466  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:33.932979  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:33.966278  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:34.198864  943004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:58:34.207212  943004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:58:34.248907  943004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:58:34.339319  943004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:58:34.604281  943004 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1229 07:58:34.606811  943004 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-358521" to be "Ready" ...
	I1229 07:58:35.111322  943004 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-358521" context rescaled to 1 replicas
	I1229 07:58:35.216860  943004 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:58:35.219707  943004 addons.go:530] duration metric: took 1.486426611s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1229 07:58:36.616256  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	W1229 07:58:39.110511  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	W1229 07:58:41.609544  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	W1229 07:58:43.610193  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 29 07:58:26 embed-certs-664524 crio[663]: time="2025-12-29T07:58:26.855750374Z" level=info msg="Error loading conmon cgroup of container 278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276: cgroup deleted" id=459f23b7-b87a-4bcf-ba67-493e009835ba name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:58:26 embed-certs-664524 crio[663]: time="2025-12-29T07:58:26.861407647Z" level=info msg="Removed container 278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w/dashboard-metrics-scraper" id=459f23b7-b87a-4bcf-ba67-493e009835ba name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:58:29 embed-certs-664524 conmon[1197]: conmon 7539540a5a2b81108d15 <ninfo>: container 1200 exited with status 1
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.870249351Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b13faf0b-e67b-405d-8bfd-df1cddcf5f49 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.871513361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69b84f50-1b4c-42d1-a5e8-7a502a37062f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.887916733Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=92f71636-197c-4f49-8bc1-a68756c4baf9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.888037407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.89523948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.895418771Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2b3c184c68a14dfb88ffb1870a95ccbf75e3ffe55727bb2e48330e3e10c4315c/merged/etc/passwd: no such file or directory"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.895441401Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2b3c184c68a14dfb88ffb1870a95ccbf75e3ffe55727bb2e48330e3e10c4315c/merged/etc/group: no such file or directory"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.895689715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.925662871Z" level=info msg="Created container a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c: kube-system/storage-provisioner/storage-provisioner" id=92f71636-197c-4f49-8bc1-a68756c4baf9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.92792153Z" level=info msg="Starting container: a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c" id=304e31a4-fcb4-4def-8579-0691eedc670d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.930102904Z" level=info msg="Started container" PID=1685 containerID=a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c description=kube-system/storage-provisioner/storage-provisioner id=304e31a4-fcb4-4def-8579-0691eedc670d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e091512d07293f5279f4e14540f0606be0f0237f7ce6a97d228d6dea4276155e
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.496812505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.496860153Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.501627147Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.501783111Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.506318243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.506354042Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.510649877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.510685291Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.510712245Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.514899682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.514938181Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a1e706703f377       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   e091512d07293       storage-provisioner                          kube-system
	7c126b6a48dd7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   13821135b2816       dashboard-metrics-scraper-867fb5f87b-7cf5w   kubernetes-dashboard
	924f3c959e2ad       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   79da333ad802d       kubernetes-dashboard-b84665fb8-brfwf         kubernetes-dashboard
	dc3bc9b218eef       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           49 seconds ago      Running             coredns                     1                   fb7afb134b053       coredns-7d764666f9-pcpsf                     kube-system
	7539540a5a2b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   e091512d07293       storage-provisioner                          kube-system
	eac02079d70a1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   dac163038ba64       busybox                                      default
	482c99472f92c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           49 seconds ago      Running             kindnet-cni                 1                   8ff92f554b20f       kindnet-c8jvc                                kube-system
	b01da39edeea7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           49 seconds ago      Running             kube-proxy                  1                   fc65016b801ea       kube-proxy-tdfv8                             kube-system
	654e52c22fa70       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           55 seconds ago      Running             kube-apiserver              1                   2798bc4a2bec4       kube-apiserver-embed-certs-664524            kube-system
	d33e8a566872e       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           55 seconds ago      Running             etcd                        1                   db2bc6e357912       etcd-embed-certs-664524                      kube-system
	f4bd112a2b0e2       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           55 seconds ago      Running             kube-controller-manager     1                   ddf34b253e939       kube-controller-manager-embed-certs-664524   kube-system
	7de23a4601038       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           55 seconds ago      Running             kube-scheduler              1                   26bf7c0fcc110       kube-scheduler-embed-certs-664524            kube-system
	
	
	==> coredns [dc3bc9b218eefd1dd5a0fb53fa0c8e4db9142d53881da74343b285d0d05b6758] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46328 - 14024 "HINFO IN 9059781156673576252.6334997300942418132. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.067086486s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-664524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-664524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=embed-certs-664524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:56:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-664524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-664524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                aa87aa83-06f4-4a87-8b15-861c15da3cbf
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-7d764666f9-pcpsf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     102s
	  kube-system                 etcd-embed-certs-664524                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         107s
	  kube-system                 kindnet-c8jvc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-embed-certs-664524             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-embed-certs-664524    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-tdfv8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-embed-certs-664524             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7cf5w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-brfwf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  103s  node-controller  Node embed-certs-664524 event: Registered Node embed-certs-664524 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node embed-certs-664524 event: Registered Node embed-certs-664524 in Controller
	
	
	==> dmesg <==
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d33e8a566872e68896c0f3f3ed489834fbd96480e02e43b56550ca6df569cf2e] <==
	{"level":"info","ts":"2025-12-29T07:57:53.428554Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:57:53.428674Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:57:53.447213Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:57:53.447334Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:57:53.424777Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:57:53.449283Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:57:54.153181Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:57:54.153226Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:57:54.153286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:57:54.153298Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:57:54.153312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.154363Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.154397Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:57:54.154415Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.154425Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.157068Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-664524 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:57:54.157227Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:57:54.158141Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:57:54.159927Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:57:54.160805Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:57:54.185791Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:57:54.197865Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:57:54.198561Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:57:54.199296Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-12-29T07:57:55.123911Z","caller":"embed/config_logging.go:194","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52442","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:52442: read: connection reset by peer"}
	
	
	==> kernel <==
	 07:58:48 up  4:41,  0 user,  load average: 2.83, 2.09, 2.27
	Linux embed-certs-664524 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [482c99472f92c834414b043b80388d6641f45c36fa36354051c42a85cd7b4c40] <==
	I1229 07:57:59.269928       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:57:59.270138       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:57:59.270272       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:57:59.270284       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:57:59.270293       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:57:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:57:59.493628       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:57:59.493660       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:57:59.493671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:57:59.494137       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:58:29.483104       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:58:29.494625       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 07:58:29.494737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:58:29.506197       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 07:58:31.094273       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:58:31.094316       1 metrics.go:72] Registering metrics
	I1229 07:58:31.094380       1 controller.go:711] "Syncing nftables rules"
	I1229 07:58:39.487916       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:58:39.487967       1 main.go:301] handling current node
	
	
	==> kube-apiserver [654e52c22fa70936846b530c44f221ec218a752ca1cc7140e9422de0ec6628d3] <==
	I1229 07:57:58.077044       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:58.084181       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:57:58.084399       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:57:58.084408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:57:58.110100       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1229 07:57:58.156224       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:57:58.172098       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:57:58.173423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:57:58.183726       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:57:58.183862       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:57:58.183945       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1229 07:57:58.184319       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:57:58.197598       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:57:58.197707       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:57:58.393374       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:57:58.697525       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:57:59.477985       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:57:59.660173       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:57:59.799059       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:57:59.854030       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:58:00.179205       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.179.37"}
	I1229 07:58:00.228236       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.174.87"}
	I1229 07:58:01.370885       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:58:01.634815       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:58:01.682188       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f4bd112a2b0e2eb9aca6d9b1d6032fd1d77abfb043c5af20060885edb1a04b53] <==
	I1229 07:58:01.064060       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064072       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064080       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064087       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.070647       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081296       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081556       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081579       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081783       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081838       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081874       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081920       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.082033       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.082057       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.082324       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064093       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064100       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064106       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064112       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.100446       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.132057       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.155679       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.155711       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:58:01.155718       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:58:01.718201       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [b01da39edeea753e83c2d146e6837a58dff702fec6e5bdb574134e1badac68e8] <==
	I1229 07:57:59.431541       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:57:59.674939       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:59.796924       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:59.796971       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:57:59.797066       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:58:00.226289       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:58:00.226356       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:58:00.356062       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:58:00.356708       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:58:00.356726       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:58:00.398004       1 config.go:200] "Starting service config controller"
	I1229 07:58:00.484577       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:58:00.456874       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:58:00.484739       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:58:00.456918       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:58:00.484840       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:58:00.484891       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:58:00.465679       1 config.go:309] "Starting node config controller"
	I1229 07:58:00.484956       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:58:00.484991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:58:00.585481       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:58:00.585487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7de23a46010383a4bcead63ecc65f9f99043b4ec3bb3d680e0839a2c53de7654] <==
	I1229 07:57:54.605948       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:57:57.453321       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:57:57.453362       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:57:57.453388       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:57:57.453396       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:57:57.946846       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:57:57.971247       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:57:58.063722       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:57:58.069985       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:57:58.070123       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:58.070010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:57:58.181203       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:58:14 embed-certs-664524 kubelet[790]: I1229 07:58:14.813423     790 scope.go:122] "RemoveContainer" containerID="c5d4565f52d427a8b7bd40938f4b8b654e8e7909586140f1f3ad2027a7530b37"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: E1229 07:58:15.817093     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: I1229 07:58:15.817621     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: E1229 07:58:15.817871     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: I1229 07:58:15.818349     790 scope.go:122] "RemoveContainer" containerID="c5d4565f52d427a8b7bd40938f4b8b654e8e7909586140f1f3ad2027a7530b37"
	Dec 29 07:58:16 embed-certs-664524 kubelet[790]: E1229 07:58:16.820882     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:16 embed-certs-664524 kubelet[790]: I1229 07:58:16.820922     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:16 embed-certs-664524 kubelet[790]: E1229 07:58:16.821077     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:22 embed-certs-664524 kubelet[790]: E1229 07:58:22.585851     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:22 embed-certs-664524 kubelet[790]: I1229 07:58:22.585903     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:22 embed-certs-664524 kubelet[790]: E1229 07:58:22.586098     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: E1229 07:58:26.552721     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: I1229 07:58:26.553629     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: I1229 07:58:26.846359     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: E1229 07:58:26.846826     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: I1229 07:58:26.847087     790 scope.go:122] "RemoveContainer" containerID="7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: E1229 07:58:26.847766     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:29 embed-certs-664524 kubelet[790]: I1229 07:58:29.868291     790 scope.go:122] "RemoveContainer" containerID="7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f"
	Dec 29 07:58:31 embed-certs-664524 kubelet[790]: E1229 07:58:31.743964     790 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pcpsf" containerName="coredns"
	Dec 29 07:58:32 embed-certs-664524 kubelet[790]: E1229 07:58:32.585303     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:32 embed-certs-664524 kubelet[790]: I1229 07:58:32.585345     790 scope.go:122] "RemoveContainer" containerID="7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	Dec 29 07:58:32 embed-certs-664524 kubelet[790]: E1229 07:58:32.585534     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:45 embed-certs-664524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:58:46 embed-certs-664524 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:58:46 embed-certs-664524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [924f3c959e2ad4291d69a58bd2fdba2b25cbc103d702c8ba5175b57b4d74da63] <==
	2025/12/29 07:58:09 Starting overwatch
	2025/12/29 07:58:09 Using namespace: kubernetes-dashboard
	2025/12/29 07:58:09 Using in-cluster config to connect to apiserver
	2025/12/29 07:58:09 Using secret token for csrf signing
	2025/12/29 07:58:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:58:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:58:09 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:58:09 Generating JWE encryption key
	2025/12/29 07:58:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:58:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:58:10 Initializing JWE encryption key from synchronized object
	2025/12/29 07:58:10 Creating in-cluster Sidecar client
	2025/12/29 07:58:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:58:10 Serving insecurely on HTTP port: 9090
	2025/12/29 07:58:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f] <==
	I1229 07:57:59.456760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:58:29.464339       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c] <==
	I1229 07:58:29.960492       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:58:29.980268       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:58:29.980399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:58:29.983014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:33.458181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:37.718757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:41.316986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:44.371073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.395928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.404054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:58:47.404341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:58:47.404578       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-664524_8e1b0654-ef3f-4bec-aef6-f9ac06a916dc!
	I1229 07:58:47.406366       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3779bf10-bb43-4dfc-bba7-8f9e72c926a6", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-664524_8e1b0654-ef3f-4bec-aef6-f9ac06a916dc became leader
	W1229 07:58:47.418567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.423656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:58:47.511914       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-664524_8e1b0654-ef3f-4bec-aef6-f9ac06a916dc!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-664524 -n embed-certs-664524
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-664524 -n embed-certs-664524: exit status 2 (419.613107ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-664524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-664524
helpers_test.go:244: (dbg) docker inspect embed-certs-664524:

-- stdout --
	[
	    {
	        "Id": "ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1",
	        "Created": "2025-12-29T07:56:41.560040966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 940596,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:57:45.731430628Z",
	            "FinishedAt": "2025-12-29T07:57:44.804058002Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/hosts",
	        "LogPath": "/var/lib/docker/containers/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1/ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1-json.log",
	        "Name": "/embed-certs-664524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-664524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-664524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ece150c04980a11408d1e312fd2e3ee1740264a2702a572bc46ecd02caa3f2a1",
	                "LowerDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5df02a147b0320621df1a2e8182e4dde46954dd5e1dd1a32299637c80ec0f91b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-664524",
	                "Source": "/var/lib/docker/volumes/embed-certs-664524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-664524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-664524",
	                "name.minikube.sigs.k8s.io": "embed-certs-664524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e60b93a129bb746ce17b88e73bb23ac02439bb2f3ba87312a0ca82675f108f70",
	            "SandboxKey": "/var/run/docker/netns/e60b93a129bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-664524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:b9:68:b3:75:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2572828bf01c704dfae705c3731b5490ef1a8c7071b9170453d61e871f33f10",
	                    "EndpointID": "d9178428ffb2c37ad5560ab713f826cdd19868eb552dc23c9916036388fbe329",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-664524",
	                        "ece150c04980"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524: exit status 2 (376.53466ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-664524 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-664524 logs -n 25: (1.487590697s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:53 UTC │ 29 Dec 25 07:53 UTC │
	│ image   │ old-k8s-version-512867 image list --format=json                                                                                                                                                                                               │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ pause   │ -p old-k8s-version-512867 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │                     │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:58 UTC │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p force-systemd-flag-122604                                                                                                                                                                                                                  │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p disable-driver-mounts-062900                                                                                                                                                                                                               │ disable-driver-mounts-062900 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ image   │ embed-certs-664524 image list --format=json                                                                                                                                                                                                   │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:58:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:58:00.431236  943004 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:58:00.431490  943004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:00.431522  943004 out.go:374] Setting ErrFile to fd 2...
	I1229 07:58:00.431544  943004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:00.431901  943004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:58:00.432501  943004 out.go:368] Setting JSON to false
	I1229 07:58:00.437873  943004 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16832,"bootTime":1766978248,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:58:00.438010  943004 start.go:143] virtualization:  
	I1229 07:58:00.272372  940463 addons.go:530] duration metric: took 6.892514577s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1229 07:58:00.289826  940463 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:58:00.293126  940463 api_server.go:141] control plane version: v1.35.0
	I1229 07:58:00.293159  940463 api_server.go:131] duration metric: took 34.57797ms to wait for apiserver health ...
	I1229 07:58:00.293169  940463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:58:00.304329  940463 system_pods.go:59] 8 kube-system pods found
	I1229 07:58:00.304380  940463 system_pods.go:61] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:58:00.304393  940463 system_pods.go:61] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:58:00.304402  940463 system_pods.go:61] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:58:00.304410  940463 system_pods.go:61] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:58:00.304420  940463 system_pods.go:61] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:58:00.304425  940463 system_pods.go:61] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:58:00.304433  940463 system_pods.go:61] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:58:00.304438  940463 system_pods.go:61] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Running
	I1229 07:58:00.304445  940463 system_pods.go:74] duration metric: took 11.268005ms to wait for pod list to return data ...
	I1229 07:58:00.304454  940463 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:58:00.313978  940463 default_sa.go:45] found service account: "default"
	I1229 07:58:00.314008  940463 default_sa.go:55] duration metric: took 9.547066ms for default service account to be created ...
	I1229 07:58:00.314019  940463 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:58:00.331486  940463 system_pods.go:86] 8 kube-system pods found
	I1229 07:58:00.331527  940463 system_pods.go:89] "coredns-7d764666f9-pcpsf" [2a4096fa-c1ce-4b41-b821-9a1a1e27d301] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:58:00.331541  940463 system_pods.go:89] "etcd-embed-certs-664524" [cfe4eb21-b2cd-4115-9ffe-603e2cbf0c30] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:58:00.331548  940463 system_pods.go:89] "kindnet-c8jvc" [103ef0e3-8d0d-4bbc-a430-92b7c002114c] Running
	I1229 07:58:00.331556  940463 system_pods.go:89] "kube-apiserver-embed-certs-664524" [067d055c-671c-4e2c-a927-efc4fbb42836] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:58:00.331563  940463 system_pods.go:89] "kube-controller-manager-embed-certs-664524" [70842a82-e0c6-44df-93b6-46c73923df5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:58:00.331569  940463 system_pods.go:89] "kube-proxy-tdfv8" [69aab033-b5bf-44e8-96c0-20f616cd77a2] Running
	I1229 07:58:00.331576  940463 system_pods.go:89] "kube-scheduler-embed-certs-664524" [f9ed8fe2-ebf0-47ea-9b7e-a533fc97aedb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:58:00.331582  940463 system_pods.go:89] "storage-provisioner" [c9bef27e-be02-4cf3-b384-c50517239a0e] Running
	I1229 07:58:00.331591  940463 system_pods.go:126] duration metric: took 17.564708ms to wait for k8s-apps to be running ...
	I1229 07:58:00.331600  940463 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:58:00.331665  940463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:58:00.361340  940463 system_svc.go:56] duration metric: took 29.729187ms WaitForService to wait for kubelet
	I1229 07:58:00.361375  940463 kubeadm.go:587] duration metric: took 6.984909163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:58:00.361399  940463 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:58:00.365714  940463 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:58:00.365744  940463 node_conditions.go:123] node cpu capacity is 2
	I1229 07:58:00.365760  940463 node_conditions.go:105] duration metric: took 4.355413ms to run NodePressure ...
	I1229 07:58:00.365775  940463 start.go:242] waiting for startup goroutines ...
	I1229 07:58:00.365782  940463 start.go:247] waiting for cluster config update ...
	I1229 07:58:00.365794  940463 start.go:256] writing updated cluster config ...
	I1229 07:58:00.366098  940463 ssh_runner.go:195] Run: rm -f paused
	I1229 07:58:00.371665  940463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:58:00.380846  940463 pod_ready.go:83] waiting for pod "coredns-7d764666f9-pcpsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:00.444902  943004 out.go:179] * [default-k8s-diff-port-358521] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:58:00.448159  943004 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:58:00.448244  943004 notify.go:221] Checking for updates...
	I1229 07:58:00.454342  943004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:58:00.457488  943004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:58:00.460638  943004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:58:00.463802  943004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:58:00.466952  943004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:58:00.470656  943004 config.go:182] Loaded profile config "embed-certs-664524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:00.470849  943004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:58:00.520443  943004 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:58:00.520565  943004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:00.587828  943004 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:58:00.575784205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:00.587965  943004 docker.go:319] overlay module found
	I1229 07:58:00.591193  943004 out.go:179] * Using the docker driver based on user configuration
	I1229 07:58:00.594209  943004 start.go:309] selected driver: docker
	I1229 07:58:00.594268  943004 start.go:928] validating driver "docker" against <nil>
	I1229 07:58:00.594285  943004 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:58:00.595161  943004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:00.703379  943004 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:58:00.692611373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:00.703676  943004 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:58:00.703901  943004 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:58:00.706980  943004 out.go:179] * Using Docker driver with root privileges
	I1229 07:58:00.712172  943004 cni.go:84] Creating CNI manager for ""
	I1229 07:58:00.712247  943004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:00.712264  943004 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:58:00.712431  943004 start.go:353] cluster config:
	{Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:58:00.715708  943004 out.go:179] * Starting "default-k8s-diff-port-358521" primary control-plane node in "default-k8s-diff-port-358521" cluster
	I1229 07:58:00.718567  943004 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:58:00.721415  943004 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:58:00.724326  943004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:00.724379  943004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:58:00.724407  943004 cache.go:65] Caching tarball of preloaded images
	I1229 07:58:00.724419  943004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:58:00.724505  943004 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:58:00.724516  943004 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:58:00.724644  943004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json ...
	I1229 07:58:00.724679  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json: {Name:mk16cba71448c6c5e019904381a1c110a56ee972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:00.745476  943004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:58:00.745503  943004 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:58:00.745525  943004 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:58:00.745557  943004 start.go:360] acquireMachinesLock for default-k8s-diff-port-358521: {Name:mk006058b00af6cad673fa6451223fba67b65954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:58:00.745673  943004 start.go:364] duration metric: took 93.35µs to acquireMachinesLock for "default-k8s-diff-port-358521"
	I1229 07:58:00.745704  943004 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:58:00.745775  943004 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:58:00.749382  943004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:58:00.749632  943004 start.go:159] libmachine.API.Create for "default-k8s-diff-port-358521" (driver="docker")
	I1229 07:58:00.749674  943004 client.go:173] LocalClient.Create starting
	I1229 07:58:00.749752  943004 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:58:00.749795  943004 main.go:144] libmachine: Decoding PEM data...
	I1229 07:58:00.749815  943004 main.go:144] libmachine: Parsing certificate...
	I1229 07:58:00.749875  943004 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:58:00.749898  943004 main.go:144] libmachine: Decoding PEM data...
	I1229 07:58:00.749911  943004 main.go:144] libmachine: Parsing certificate...
	I1229 07:58:00.750296  943004 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:58:00.774890  943004 cli_runner.go:211] docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:58:00.774971  943004 network_create.go:284] running [docker network inspect default-k8s-diff-port-358521] to gather additional debugging logs...
	I1229 07:58:00.774991  943004 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521
	W1229 07:58:00.796866  943004 cli_runner.go:211] docker network inspect default-k8s-diff-port-358521 returned with exit code 1
	I1229 07:58:00.796899  943004 network_create.go:287] error running [docker network inspect default-k8s-diff-port-358521]: docker network inspect default-k8s-diff-port-358521: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-358521 not found
	I1229 07:58:00.796913  943004 network_create.go:289] output of [docker network inspect default-k8s-diff-port-358521]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-358521 not found
	
	** /stderr **
	I1229 07:58:00.797030  943004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:58:00.816910  943004 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:58:00.817322  943004 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:58:00.817647  943004 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:58:00.818077  943004 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a054f0}
	I1229 07:58:00.818098  943004 network_create.go:124] attempt to create docker network default-k8s-diff-port-358521 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:58:00.818171  943004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 default-k8s-diff-port-358521
	I1229 07:58:00.887552  943004 network_create.go:108] docker network default-k8s-diff-port-358521 192.168.76.0/24 created
	I1229 07:58:00.887589  943004 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-358521" container
	I1229 07:58:00.887682  943004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:58:00.905823  943004 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-358521 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:58:00.934506  943004 oci.go:103] Successfully created a docker volume default-k8s-diff-port-358521
	I1229 07:58:00.934595  943004 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-358521-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --entrypoint /usr/bin/test -v default-k8s-diff-port-358521:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:58:01.550644  943004 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-358521
	I1229 07:58:01.550714  943004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:01.550729  943004 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:58:01.550792  943004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-358521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	W1229 07:58:02.387676  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:04.388571  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:06.491301  943004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-358521:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (4.940465298s)
	I1229 07:58:06.491329  943004 kic.go:203] duration metric: took 4.940597254s to extract preloaded images to volume ...
	W1229 07:58:06.491474  943004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:58:06.491568  943004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:58:06.584210  943004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-358521 --name default-k8s-diff-port-358521 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-358521 --network default-k8s-diff-port-358521 --ip 192.168.76.2 --volume default-k8s-diff-port-358521:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:58:07.077008  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Running}}
	I1229 07:58:07.099204  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:07.118032  943004 cli_runner.go:164] Run: docker exec default-k8s-diff-port-358521 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:58:07.193498  943004 oci.go:144] the created container "default-k8s-diff-port-358521" has a running status.
	I1229 07:58:07.193535  943004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa...
	I1229 07:58:07.969565  943004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:58:07.990740  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:08.021883  943004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:58:08.021916  943004 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-358521 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:58:08.094410  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:08.115028  943004 machine.go:94] provisionDockerMachine start ...
	I1229 07:58:08.115210  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:08.138271  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:08.138615  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:08.138624  943004 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:58:08.140763  943004 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1229 07:58:06.901277  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:09.387888  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:11.308918  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-358521
	
	I1229 07:58:11.308991  943004 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-358521"
	I1229 07:58:11.309094  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:11.328937  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:11.329286  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:11.329303  943004 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-358521 && echo "default-k8s-diff-port-358521" | sudo tee /etc/hostname
	I1229 07:58:11.499633  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-358521
	
	I1229 07:58:11.499815  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:11.525765  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:11.526076  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:11.526105  943004 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-358521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-358521/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-358521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:58:11.697847  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:58:11.697884  943004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:58:11.697933  943004 ubuntu.go:190] setting up certificates
	I1229 07:58:11.697948  943004 provision.go:84] configureAuth start
	I1229 07:58:11.698017  943004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:58:11.732076  943004 provision.go:143] copyHostCerts
	I1229 07:58:11.732480  943004 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:58:11.732500  943004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:58:11.732679  943004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:58:11.732919  943004 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:58:11.732930  943004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:58:11.732982  943004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:58:11.733068  943004 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:58:11.733075  943004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:58:11.733110  943004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:58:11.733173  943004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-358521 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-358521 localhost minikube]
	I1229 07:58:11.860396  943004 provision.go:177] copyRemoteCerts
	I1229 07:58:11.860481  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:58:11.860524  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:11.890136  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.007620  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1229 07:58:12.038619  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:58:12.064545  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:58:12.090417  943004 provision.go:87] duration metric: took 392.44628ms to configureAuth
	I1229 07:58:12.090460  943004 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:58:12.090670  943004 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:12.090800  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.114793  943004 main.go:144] libmachine: Using SSH client type: native
	I1229 07:58:12.115111  943004 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33843 <nil> <nil>}
	I1229 07:58:12.115131  943004 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:58:12.452884  943004 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:58:12.452911  943004 machine.go:97] duration metric: took 4.33786402s to provisionDockerMachine
	I1229 07:58:12.452928  943004 client.go:176] duration metric: took 11.703236565s to LocalClient.Create
	I1229 07:58:12.452943  943004 start.go:167] duration metric: took 11.703313473s to libmachine.API.Create "default-k8s-diff-port-358521"
	I1229 07:58:12.452953  943004 start.go:293] postStartSetup for "default-k8s-diff-port-358521" (driver="docker")
	I1229 07:58:12.452967  943004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:58:12.453041  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:58:12.453102  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.478030  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.594091  943004 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:58:12.598093  943004 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:58:12.598132  943004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:58:12.598143  943004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:58:12.598204  943004 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:58:12.598296  943004 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:58:12.598408  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:58:12.606086  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:58:12.623611  943004 start.go:296] duration metric: took 170.642286ms for postStartSetup
	I1229 07:58:12.623976  943004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:58:12.643929  943004 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json ...
	I1229 07:58:12.644239  943004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:58:12.644330  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.668337  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.779725  943004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:58:12.785945  943004 start.go:128] duration metric: took 12.040148928s to createHost
	I1229 07:58:12.785972  943004 start.go:83] releasing machines lock for "default-k8s-diff-port-358521", held for 12.040283657s
	I1229 07:58:12.786053  943004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:58:12.821176  943004 ssh_runner.go:195] Run: cat /version.json
	I1229 07:58:12.821230  943004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:58:12.821238  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.821380  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:12.867102  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:12.882426  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:13.104207  943004 ssh_runner.go:195] Run: systemctl --version
	I1229 07:58:13.117643  943004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:58:13.185682  943004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:58:13.190571  943004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:58:13.190669  943004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:58:13.225196  943004 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:58:13.225239  943004 start.go:496] detecting cgroup driver to use...
	I1229 07:58:13.225271  943004 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:58:13.225324  943004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:58:13.249725  943004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:58:13.270584  943004 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:58:13.270660  943004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:58:13.296068  943004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:58:13.322076  943004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:58:13.524060  943004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:58:13.700001  943004 docker.go:234] disabling docker service ...
	I1229 07:58:13.700079  943004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:58:13.732720  943004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:58:13.747648  943004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:58:13.904095  943004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:58:14.071722  943004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:58:14.088344  943004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:58:14.108301  943004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:58:14.108422  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.117994  943004 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:58:14.118121  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.130583  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.143583  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.157424  943004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:58:14.177211  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.186698  943004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.209360  943004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:58:14.219410  943004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:58:14.228338  943004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:58:14.237960  943004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:58:14.395556  943004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:58:14.844715  943004 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:58:14.844884  943004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:58:14.851930  943004 start.go:574] Will wait 60s for crictl version
	I1229 07:58:14.852051  943004 ssh_runner.go:195] Run: which crictl
	I1229 07:58:14.863775  943004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:58:14.904608  943004 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:58:14.904763  943004 ssh_runner.go:195] Run: crio --version
	I1229 07:58:14.933035  943004 ssh_runner.go:195] Run: crio --version
	I1229 07:58:14.966895  943004 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:58:14.969878  943004 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:58:14.986081  943004 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:58:14.989978  943004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:58:14.999784  943004 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:58:14.999915  943004 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:14.999976  943004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:58:15.048428  943004 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:58:15.048455  943004 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:58:15.048514  943004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:58:15.076753  943004 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:58:15.076777  943004 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:58:15.076819  943004 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1229 07:58:15.076921  943004 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-358521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:58:15.077005  943004 ssh_runner.go:195] Run: crio config
	I1229 07:58:15.144559  943004 cni.go:84] Creating CNI manager for ""
	I1229 07:58:15.144593  943004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:15.144610  943004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:58:15.144635  943004 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-358521 NodeName:default-k8s-diff-port-358521 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:58:15.144826  943004 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-358521"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:58:15.144912  943004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:58:15.153017  943004 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:58:15.153096  943004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:58:15.161106  943004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1229 07:58:15.175332  943004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:58:15.188449  943004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1229 07:58:15.201730  943004 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:58:15.205439  943004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:58:15.215071  943004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:58:15.335456  943004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:58:15.354079  943004 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521 for IP: 192.168.76.2
	I1229 07:58:15.354146  943004 certs.go:195] generating shared ca certs ...
	I1229 07:58:15.354177  943004 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.354351  943004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:58:15.354433  943004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:58:15.354459  943004 certs.go:257] generating profile certs ...
	I1229 07:58:15.354533  943004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key
	I1229 07:58:15.354569  943004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt with IP's: []
	W1229 07:58:11.897225  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:14.386904  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:15.587271  943004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt ...
	I1229 07:58:15.587307  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: {Name:mk00149d860648af75f7db82a1f663a0e5cf0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.587520  943004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key ...
	I1229 07:58:15.587537  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key: {Name:mk2b1a3440df07047bf5a540398a51c93c591f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.587634  943004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a
	I1229 07:58:15.587652  943004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:58:15.826613  943004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a ...
	I1229 07:58:15.826646  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a: {Name:mkc4c9e2f2b281bef7b33b743b8595fadd4b3579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.828224  943004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a ...
	I1229 07:58:15.828301  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a: {Name:mkce1814bd12f69c9167052b091b402d53c4dfbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.828558  943004 certs.go:382] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt.22f9918a -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt
	I1229 07:58:15.828728  943004 certs.go:386] copying /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a -> /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key
	I1229 07:58:15.828891  943004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key
	I1229 07:58:15.828950  943004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt with IP's: []
	I1229 07:58:15.939481  943004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt ...
	I1229 07:58:15.939514  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt: {Name:mkc3c4f5cf1af7486aea1e8d1c97a70f236aa904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.939707  943004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key ...
	I1229 07:58:15.939722  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key: {Name:mkdb4eb6351e76c37d03a5beb250f62df8a8e7ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:15.939932  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:58:15.939983  943004 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:58:15.940006  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:58:15.940035  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:58:15.940062  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:58:15.940088  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:58:15.940141  943004 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:58:15.940720  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:58:15.963286  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:58:15.985567  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:58:16.007580  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:58:16.029142  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1229 07:58:16.048015  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:58:16.067950  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:58:16.086974  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:58:16.111058  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:58:16.128678  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:58:16.146871  943004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:58:16.165808  943004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:58:16.179408  943004 ssh_runner.go:195] Run: openssl version
	I1229 07:58:16.185694  943004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.193807  943004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:58:16.202587  943004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.207329  943004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.207443  943004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:58:16.251291  943004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:58:16.259269  943004 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:58:16.266857  943004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.275562  943004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:58:16.283276  943004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.287281  943004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.287363  943004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:58:16.329530  943004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:58:16.337532  943004 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/741854.pem /etc/ssl/certs/51391683.0
	I1229 07:58:16.345178  943004 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.353009  943004 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:58:16.360946  943004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.364954  943004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.365072  943004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:58:16.407945  943004 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:58:16.415798  943004 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7418542.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:58:16.423407  943004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:58:16.427007  943004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:58:16.427061  943004 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:58:16.427155  943004 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:58:16.427218  943004 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:58:16.455848  943004 cri.go:96] found id: ""
	I1229 07:58:16.455922  943004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:58:16.463571  943004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:58:16.471468  943004 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:58:16.471534  943004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:58:16.479988  943004 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:58:16.480060  943004 kubeadm.go:158] found existing configuration files:
	
	I1229 07:58:16.480144  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1229 07:58:16.488144  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:58:16.488220  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:58:16.495483  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1229 07:58:16.503026  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:58:16.503096  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:58:16.510687  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1229 07:58:16.518957  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:58:16.519073  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:58:16.527033  943004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1229 07:58:16.534806  943004 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:58:16.534921  943004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
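
	The four grep/rm pairs above are a simple stale-config check: any /etc/kubernetes/*.conf that does not already reference https://control-plane.minikube.internal:8444 is removed so that kubeadm can regenerate it on the next init. A sketch of that logic (an assumed helper for illustration, not minikube's source):

	-- example sketch (Go) --
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// cleanStaleConfigs keeps a kubeconfig only if it already points at the
	// expected control-plane endpoint; anything else is removed so that
	// "kubeadm init" regenerates it from scratch.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
				continue
			}
			fmt.Printf("kept %s\n", f)
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
	-- /example --
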
	I1229 07:58:16.542420  943004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:58:16.579276  943004 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:58:16.579441  943004 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:58:16.656618  943004 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:58:16.656707  943004 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:58:16.656756  943004 kubeadm.go:319] OS: Linux
	I1229 07:58:16.656876  943004 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:58:16.656978  943004 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:58:16.657064  943004 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:58:16.657148  943004 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:58:16.657255  943004 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:58:16.657326  943004 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:58:16.657400  943004 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:58:16.657469  943004 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:58:16.657539  943004 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:58:16.733879  943004 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:58:16.734054  943004 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:58:16.734180  943004 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:58:16.745194  943004 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:58:16.751508  943004 out.go:252]   - Generating certificates and keys ...
	I1229 07:58:16.751686  943004 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:58:16.751827  943004 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:58:17.023503  943004 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:58:17.710826  943004 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:58:18.569063  943004 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:58:18.683921  943004 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:58:18.798531  943004 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:58:18.798935  943004 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-358521 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:58:18.991076  943004 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:58:18.991381  943004 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-358521 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:58:19.441547  943004 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:58:19.574253  943004 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:58:19.815013  943004 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:58:19.815332  943004 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:58:20.186388  943004 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	W1229 07:58:16.387867  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:18.390453  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:20.478974  943004 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:58:20.666938  943004 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:58:20.886606  943004 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:58:21.205559  943004 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:58:21.206321  943004 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:58:21.210696  943004 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:58:21.214094  943004 out.go:252]   - Booting up control plane ...
	I1229 07:58:21.214200  943004 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:58:21.214284  943004 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:58:21.214355  943004 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:58:21.231138  943004 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:58:21.231432  943004 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:58:21.240121  943004 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:58:21.240466  943004 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:58:21.240653  943004 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:58:21.375181  943004 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:58:21.375306  943004 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:58:22.377017  943004 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001932806s
	I1229 07:58:22.381484  943004 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1229 07:58:22.383193  943004 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1229 07:58:22.383302  943004 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1229 07:58:22.383386  943004 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1229 07:58:23.892762  943004 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.509987181s
	W1229 07:58:20.888722  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:22.889215  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:25.387364  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:25.833617  943004 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.451321587s
	I1229 07:58:27.887513  943004 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.504065067s
	I1229 07:58:27.922449  943004 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:58:27.939001  943004 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:58:27.954876  943004 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:58:27.955113  943004 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-358521 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:58:27.968710  943004 kubeadm.go:319] [bootstrap-token] Using token: 9qe0ci.3tjt6vxlx8dvskrk
	I1229 07:58:27.971888  943004 out.go:252]   - Configuring RBAC rules ...
	I1229 07:58:27.972012  943004 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:58:27.976095  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:58:27.984741  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:58:27.991113  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:58:27.995125  943004 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:58:27.998970  943004 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:58:28.297496  943004 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:58:28.721120  943004 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:58:29.301866  943004 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:58:29.303114  943004 kubeadm.go:319] 
	I1229 07:58:29.303193  943004 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:58:29.303201  943004 kubeadm.go:319] 
	I1229 07:58:29.303278  943004 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:58:29.303285  943004 kubeadm.go:319] 
	I1229 07:58:29.303310  943004 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:58:29.303374  943004 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:58:29.303428  943004 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:58:29.303436  943004 kubeadm.go:319] 
	I1229 07:58:29.303496  943004 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:58:29.303505  943004 kubeadm.go:319] 
	I1229 07:58:29.303553  943004 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:58:29.303561  943004 kubeadm.go:319] 
	I1229 07:58:29.303612  943004 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:58:29.303691  943004 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:58:29.303764  943004 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:58:29.303772  943004 kubeadm.go:319] 
	I1229 07:58:29.303858  943004 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:58:29.303938  943004 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:58:29.303946  943004 kubeadm.go:319] 
	I1229 07:58:29.304030  943004 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 9qe0ci.3tjt6vxlx8dvskrk \
	I1229 07:58:29.304137  943004 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:58:29.304160  943004 kubeadm.go:319] 	--control-plane 
	I1229 07:58:29.304175  943004 kubeadm.go:319] 
	I1229 07:58:29.304259  943004 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:58:29.304267  943004 kubeadm.go:319] 
	I1229 07:58:29.304350  943004 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 9qe0ci.3tjt6vxlx8dvskrk \
	I1229 07:58:29.304456  943004 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
	I1229 07:58:29.308526  943004 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:58:29.308961  943004 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:58:29.309075  943004 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
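
	The kubelet-check and control-plane-check phases above poll plain health endpoints: the kubelet's healthz over HTTP on 127.0.0.1:10248, and the API server, controller-manager and scheduler over HTTPS on their usual ports (the API server on 8444 here because this profile uses a non-default port). A rough Go sketch of those probes — TLS verification is skipped only because the sketch has no cluster CA bundle at hand:

	-- example sketch (Go) --
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probe issues a single GET against a health endpoint and treats anything
	// other than HTTP 200 as unhealthy.
	func probe(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("%s: HTTP %d", url, resp.StatusCode)
		}
		return nil
	}

	func main() {
		for _, u := range []string{
			"http://127.0.0.1:10248/healthz",  // kubelet
			"https://192.168.76.2:8444/livez", // kube-apiserver (port 8444 in this profile)
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		} {
			fmt.Printf("%s -> %v\n", u, probe(u))
		}
	}
	-- /example --
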
	I1229 07:58:29.309095  943004 cni.go:84] Creating CNI manager for ""
	I1229 07:58:29.309104  943004 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:29.312262  943004 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:58:29.315167  943004 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:58:29.319557  943004 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:58:29.319575  943004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:58:29.340518  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
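
	Because the docker driver is paired with the crio runtime, minikube selects kindnet and applies its manifest with the bundled kubectl, exactly as the two Run lines above show (write the manifest to /var/tmp/minikube/cni.yaml, then kubectl apply). A compressed sketch of that step; the manifest bytes are omitted and "kindnet.yaml" is a hypothetical local copy used only for illustration:

	-- example sketch (Go) --
	package main

	import (
		"os"
		"os/exec"
	)

	// applyCNI writes the CNI manifest to the same scratch path the log shows
	// and applies it with the kubectl binary minikube downloaded for v1.35.0.
	func applyCNI(manifest []byte) error {
		tmp := "/var/tmp/minikube/cni.yaml"
		if err := os.WriteFile(tmp, manifest, 0o644); err != nil {
			return err
		}
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", tmp)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		manifest, err := os.ReadFile("kindnet.yaml") // hypothetical local copy of the manifest
		if err == nil {
			err = applyCNI(manifest)
		}
		if err != nil {
			os.Exit(1)
		}
	}
	-- /example --
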
	I1229 07:58:29.726708  943004 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:58:29.726845  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:29.726910  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-358521 minikube.k8s.io/updated_at=2025_12_29T07_58_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=default-k8s-diff-port-358521 minikube.k8s.io/primary=true
	I1229 07:58:30.085881  943004 ops.go:34] apiserver oom_adj: -16
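
	The "apiserver oom_adj: -16" line above comes from reading /proc/$(pgrep kube-apiserver)/oom_adj; a strongly negative value means the kubelet has shielded the API server pod from the OOM killer. A small sketch of the same check (pgrep is assumed to be on PATH, as in the logged command):

	-- example sketch (Go) --
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same check as "cat /proc/$(pgrep kube-apiserver)/oom_adj" in the log.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			os.Exit(1)
		}
		pid := strings.Fields(string(out))[0] // first PID if several match
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("kube-apiserver pid %s oom_adj %s", pid, adj)
	}
	-- /example --
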
	I1229 07:58:30.086004  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1229 07:58:27.887040  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	W1229 07:58:29.891974  940463 pod_ready.go:104] pod "coredns-7d764666f9-pcpsf" is not "Ready", error: <nil>
	I1229 07:58:31.886582  940463 pod_ready.go:94] pod "coredns-7d764666f9-pcpsf" is "Ready"
	I1229 07:58:31.886610  940463 pod_ready.go:86] duration metric: took 31.505734496s for pod "coredns-7d764666f9-pcpsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.889695  940463 pod_ready.go:83] waiting for pod "etcd-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.894744  940463 pod_ready.go:94] pod "etcd-embed-certs-664524" is "Ready"
	I1229 07:58:31.894771  940463 pod_ready.go:86] duration metric: took 5.047636ms for pod "etcd-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.897365  940463 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.902672  940463 pod_ready.go:94] pod "kube-apiserver-embed-certs-664524" is "Ready"
	I1229 07:58:31.902702  940463 pod_ready.go:86] duration metric: took 5.313837ms for pod "kube-apiserver-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:31.905185  940463 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.084839  940463 pod_ready.go:94] pod "kube-controller-manager-embed-certs-664524" is "Ready"
	I1229 07:58:32.084869  940463 pod_ready.go:86] duration metric: took 179.65786ms for pod "kube-controller-manager-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.285272  940463 pod_ready.go:83] waiting for pod "kube-proxy-tdfv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.684541  940463 pod_ready.go:94] pod "kube-proxy-tdfv8" is "Ready"
	I1229 07:58:32.684573  940463 pod_ready.go:86] duration metric: took 399.271615ms for pod "kube-proxy-tdfv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:32.884898  940463 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:33.284829  940463 pod_ready.go:94] pod "kube-scheduler-embed-certs-664524" is "Ready"
	I1229 07:58:33.284854  940463 pod_ready.go:86] duration metric: took 399.931428ms for pod "kube-scheduler-embed-certs-664524" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:58:33.284865  940463 pod_ready.go:40] duration metric: took 32.913153665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:58:33.375036  940463 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:58:33.378202  940463 out.go:203] 
	W1229 07:58:33.381074  940463 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:58:33.384150  940463 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:58:33.387149  940463 out.go:179] * Done! kubectl is now configured to use "embed-certs-664524" cluster and "default" namespace by default
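
	For the parallel embed-certs-664524 profile (pid 940463), the pod_ready loop above re-checks each kube-system pod's Ready condition until it turns True or the pod disappears; coredns took about 31.5s here. The same check expressed as a kubectl-based sketch — minikube itself does this through its Kubernetes client library rather than by shelling out, so this is purely illustrative:

	-- example sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls the pod's Ready condition until it reports True or
	// the timeout expires.
	func waitPodReady(ns, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, pod, timeout)
	}

	func main() {
		fmt.Println(waitPodReady("kube-system", "coredns-7d764666f9-pcpsf", 4*time.Minute))
	}
	-- /example --
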
	I1229 07:58:30.586126  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:31.086398  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:31.586704  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:32.086008  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:32.586534  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:33.086180  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:33.586771  943004 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:58:33.730886  943004 kubeadm.go:1114] duration metric: took 4.004092687s to wait for elevateKubeSystemPrivileges
	I1229 07:58:33.730915  943004 kubeadm.go:403] duration metric: took 17.303860187s to StartCluster
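
	The run of half-second-spaced "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: the command is retried until the controller-manager's serviceaccount controller has created the default ServiceAccount (about 4s in this run). A sketch of that retry loop, with the kubeconfig path taken from the log:

	-- example sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitDefaultSA retries "kubectl get sa default" until the default
	// ServiceAccount exists, mirroring the 500ms cadence visible in the log.
	func waitDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default serviceaccount not created within %s", timeout)
	}

	func main() {
		fmt.Println(waitDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
	}
	-- /example --
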
	I1229 07:58:33.730933  943004 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:33.730990  943004 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:58:33.732462  943004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:33.732708  943004 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:58:33.732995  943004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:58:33.733273  943004 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:58:33.733357  943004 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-358521"
	I1229 07:58:33.733372  943004 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-358521"
	I1229 07:58:33.733397  943004 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:58:33.733440  943004 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:33.733480  943004 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-358521"
	I1229 07:58:33.733496  943004 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-358521"
	I1229 07:58:33.734013  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:33.734490  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:33.741149  943004 out.go:179] * Verifying Kubernetes components...
	I1229 07:58:33.747420  943004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:58:33.787006  943004 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-358521"
	I1229 07:58:33.787051  943004 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:58:33.787629  943004 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:58:33.837845  943004 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:58:33.840882  943004 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:58:33.840912  943004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:58:33.840982  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:33.926385  943004 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:58:33.926407  943004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:58:33.926466  943004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:58:33.932979  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:33.966278  943004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33843 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:58:34.198864  943004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:58:34.207212  943004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:58:34.248907  943004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:58:34.339319  943004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:58:34.604281  943004 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1229 07:58:34.606811  943004 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-358521" to be "Ready" ...
	I1229 07:58:35.111322  943004 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-358521" context rescaled to 1 replicas
	I1229 07:58:35.216860  943004 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:58:35.219707  943004 addons.go:530] duration metric: took 1.486426611s for enable addons: enabled=[storage-provisioner default-storageclass]
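
	Once the control plane is up, the long sed pipeline above rewrites the coredns ConfigMap: a hosts stanza resolving host.minikube.internal to the host gateway (192.168.76.1) is inserted ahead of the forward plugin, a log directive is added before errors, and the deployment is then rescaled to one replica. A standalone sketch of the Corefile edit itself, as plain string manipulation with no cluster access:

	-- example sketch (Go) --
	package main

	import (
		"fmt"
		"strings"
	)

	// patchCorefile reproduces the sed edit from the log: a hosts stanza for
	// host.minikube.internal goes in front of the forward plugin, and a log
	// directive goes in front of errors.
	func patchCorefile(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			if trimmed == "errors" {
				out.WriteString("        log\n")
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(patchCorefile(corefile, "192.168.76.1"))
	}
	-- /example --
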
	W1229 07:58:36.616256  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	W1229 07:58:39.110511  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	W1229 07:58:41.609544  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	W1229 07:58:43.610193  943004 node_ready.go:57] node "default-k8s-diff-port-358521" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 29 07:58:26 embed-certs-664524 crio[663]: time="2025-12-29T07:58:26.855750374Z" level=info msg="Error loading conmon cgroup of container 278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276: cgroup deleted" id=459f23b7-b87a-4bcf-ba67-493e009835ba name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:58:26 embed-certs-664524 crio[663]: time="2025-12-29T07:58:26.861407647Z" level=info msg="Removed container 278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w/dashboard-metrics-scraper" id=459f23b7-b87a-4bcf-ba67-493e009835ba name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 07:58:29 embed-certs-664524 conmon[1197]: conmon 7539540a5a2b81108d15 <ninfo>: container 1200 exited with status 1
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.870249351Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b13faf0b-e67b-405d-8bfd-df1cddcf5f49 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.871513361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69b84f50-1b4c-42d1-a5e8-7a502a37062f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.887916733Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=92f71636-197c-4f49-8bc1-a68756c4baf9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.888037407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.89523948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.895418771Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2b3c184c68a14dfb88ffb1870a95ccbf75e3ffe55727bb2e48330e3e10c4315c/merged/etc/passwd: no such file or directory"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.895441401Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2b3c184c68a14dfb88ffb1870a95ccbf75e3ffe55727bb2e48330e3e10c4315c/merged/etc/group: no such file or directory"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.895689715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.925662871Z" level=info msg="Created container a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c: kube-system/storage-provisioner/storage-provisioner" id=92f71636-197c-4f49-8bc1-a68756c4baf9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.92792153Z" level=info msg="Starting container: a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c" id=304e31a4-fcb4-4def-8579-0691eedc670d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:58:29 embed-certs-664524 crio[663]: time="2025-12-29T07:58:29.930102904Z" level=info msg="Started container" PID=1685 containerID=a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c description=kube-system/storage-provisioner/storage-provisioner id=304e31a4-fcb4-4def-8579-0691eedc670d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e091512d07293f5279f4e14540f0606be0f0237f7ce6a97d228d6dea4276155e
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.496812505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.496860153Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.501627147Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.501783111Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.506318243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.506354042Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.510649877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.510685291Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.510712245Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.514899682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 07:58:39 embed-certs-664524 crio[663]: time="2025-12-29T07:58:39.514938181Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a1e706703f377       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   e091512d07293       storage-provisioner                          kube-system
	7c126b6a48dd7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   13821135b2816       dashboard-metrics-scraper-867fb5f87b-7cf5w   kubernetes-dashboard
	924f3c959e2ad       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   79da333ad802d       kubernetes-dashboard-b84665fb8-brfwf         kubernetes-dashboard
	dc3bc9b218eef       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   fb7afb134b053       coredns-7d764666f9-pcpsf                     kube-system
	7539540a5a2b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   e091512d07293       storage-provisioner                          kube-system
	eac02079d70a1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   dac163038ba64       busybox                                      default
	482c99472f92c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   8ff92f554b20f       kindnet-c8jvc                                kube-system
	b01da39edeea7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           52 seconds ago      Running             kube-proxy                  1                   fc65016b801ea       kube-proxy-tdfv8                             kube-system
	654e52c22fa70       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           57 seconds ago      Running             kube-apiserver              1                   2798bc4a2bec4       kube-apiserver-embed-certs-664524            kube-system
	d33e8a566872e       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           57 seconds ago      Running             etcd                        1                   db2bc6e357912       etcd-embed-certs-664524                      kube-system
	f4bd112a2b0e2       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           57 seconds ago      Running             kube-controller-manager     1                   ddf34b253e939       kube-controller-manager-embed-certs-664524   kube-system
	7de23a4601038       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           57 seconds ago      Running             kube-scheduler              1                   26bf7c0fcc110       kube-scheduler-embed-certs-664524            kube-system
	
	
	==> coredns [dc3bc9b218eefd1dd5a0fb53fa0c8e4db9142d53881da74343b285d0d05b6758] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46328 - 14024 "HINFO IN 9059781156673576252.6334997300942418132. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.067086486s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-664524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-664524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=embed-certs-664524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_57_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:56:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-664524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:56:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:58:28 +0000   Mon, 29 Dec 2025 07:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-664524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                aa87aa83-06f4-4a87-8b15-861c15da3cbf
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-pcpsf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-embed-certs-664524                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         110s
	  kube-system                 kindnet-c8jvc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-664524             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-664524    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-tdfv8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-664524             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7cf5w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-brfwf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node embed-certs-664524 event: Registered Node embed-certs-664524 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node embed-certs-664524 event: Registered Node embed-certs-664524 in Controller
	
	
	==> dmesg <==
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d33e8a566872e68896c0f3f3ed489834fbd96480e02e43b56550ca6df569cf2e] <==
	{"level":"info","ts":"2025-12-29T07:57:53.428554Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:57:53.428674Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:57:53.447213Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:57:53.447334Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:57:53.424777Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:57:53.449283Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:57:54.153181Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:57:54.153226Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:57:54.153286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:57:54.153298Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:57:54.153312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.154363Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.154397Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:57:54.154415Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.154425Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:57:54.157068Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-664524 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:57:54.157227Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:57:54.158141Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:57:54.159927Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:57:54.160805Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:57:54.185791Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:57:54.197865Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:57:54.198561Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:57:54.199296Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-12-29T07:57:55.123911Z","caller":"embed/config_logging.go:194","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52442","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:52442: read: connection reset by peer"}
	
	
	==> kernel <==
	 07:58:51 up  4:41,  0 user,  load average: 2.83, 2.09, 2.27
	Linux embed-certs-664524 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [482c99472f92c834414b043b80388d6641f45c36fa36354051c42a85cd7b4c40] <==
	I1229 07:57:59.269928       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:57:59.270138       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:57:59.270272       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:57:59.270284       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:57:59.270293       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:57:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:57:59.493628       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:57:59.493660       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:57:59.493671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:57:59.494137       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 07:58:29.483104       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 07:58:29.494625       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 07:58:29.494737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 07:58:29.506197       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 07:58:31.094273       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:58:31.094316       1 metrics.go:72] Registering metrics
	I1229 07:58:31.094380       1 controller.go:711] "Syncing nftables rules"
	I1229 07:58:39.487916       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:58:39.487967       1 main.go:301] handling current node
	I1229 07:58:49.488858       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1229 07:58:49.488996       1 main.go:301] handling current node
	
	
	==> kube-apiserver [654e52c22fa70936846b530c44f221ec218a752ca1cc7140e9422de0ec6628d3] <==
	I1229 07:57:58.077044       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:58.084181       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:57:58.084399       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:57:58.084408       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:57:58.110100       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1229 07:57:58.156224       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:57:58.172098       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:57:58.173423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:57:58.183726       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:57:58.183862       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:57:58.183945       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1229 07:57:58.184319       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:57:58.197598       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:57:58.197707       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:57:58.393374       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:57:58.697525       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:57:59.477985       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:57:59.660173       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:57:59.799059       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:57:59.854030       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:58:00.179205       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.179.37"}
	I1229 07:58:00.228236       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.174.87"}
	I1229 07:58:01.370885       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:58:01.634815       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:58:01.682188       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f4bd112a2b0e2eb9aca6d9b1d6032fd1d77abfb043c5af20060885edb1a04b53] <==
	I1229 07:58:01.064060       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064072       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064080       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064087       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.070647       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081296       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081556       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081579       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081783       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081838       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081874       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.081920       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.082033       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.082057       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.082324       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064093       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064100       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064106       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.064112       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.100446       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.132057       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.155679       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:01.155711       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:58:01.155718       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:58:01.718201       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [b01da39edeea753e83c2d146e6837a58dff702fec6e5bdb574134e1badac68e8] <==
	I1229 07:57:59.431541       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:57:59.674939       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:59.796924       1 shared_informer.go:377] "Caches are synced"
	I1229 07:57:59.796971       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:57:59.797066       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:58:00.226289       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:58:00.226356       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:58:00.356062       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:58:00.356708       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:58:00.356726       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:58:00.398004       1 config.go:200] "Starting service config controller"
	I1229 07:58:00.484577       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:58:00.456874       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:58:00.484739       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:58:00.456918       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:58:00.484840       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:58:00.484891       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:58:00.465679       1 config.go:309] "Starting node config controller"
	I1229 07:58:00.484956       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:58:00.484991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:58:00.585481       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:58:00.585487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7de23a46010383a4bcead63ecc65f9f99043b4ec3bb3d680e0839a2c53de7654] <==
	I1229 07:57:54.605948       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:57:57.453321       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:57:57.453362       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:57:57.453388       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:57:57.453396       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:57:57.946846       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:57:57.971247       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:57:58.063722       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:57:58.069985       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:57:58.070123       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:57:58.070010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:57:58.181203       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:58:14 embed-certs-664524 kubelet[790]: I1229 07:58:14.813423     790 scope.go:122] "RemoveContainer" containerID="c5d4565f52d427a8b7bd40938f4b8b654e8e7909586140f1f3ad2027a7530b37"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: E1229 07:58:15.817093     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: I1229 07:58:15.817621     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: E1229 07:58:15.817871     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:15 embed-certs-664524 kubelet[790]: I1229 07:58:15.818349     790 scope.go:122] "RemoveContainer" containerID="c5d4565f52d427a8b7bd40938f4b8b654e8e7909586140f1f3ad2027a7530b37"
	Dec 29 07:58:16 embed-certs-664524 kubelet[790]: E1229 07:58:16.820882     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:16 embed-certs-664524 kubelet[790]: I1229 07:58:16.820922     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:16 embed-certs-664524 kubelet[790]: E1229 07:58:16.821077     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:22 embed-certs-664524 kubelet[790]: E1229 07:58:22.585851     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:22 embed-certs-664524 kubelet[790]: I1229 07:58:22.585903     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:22 embed-certs-664524 kubelet[790]: E1229 07:58:22.586098     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: E1229 07:58:26.552721     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: I1229 07:58:26.553629     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: I1229 07:58:26.846359     790 scope.go:122] "RemoveContainer" containerID="278b44a0a6082d223f5937133d39d8d7cd2cb38f2824a2790656ffb574eca276"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: E1229 07:58:26.846826     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: I1229 07:58:26.847087     790 scope.go:122] "RemoveContainer" containerID="7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	Dec 29 07:58:26 embed-certs-664524 kubelet[790]: E1229 07:58:26.847766     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:29 embed-certs-664524 kubelet[790]: I1229 07:58:29.868291     790 scope.go:122] "RemoveContainer" containerID="7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f"
	Dec 29 07:58:31 embed-certs-664524 kubelet[790]: E1229 07:58:31.743964     790 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-pcpsf" containerName="coredns"
	Dec 29 07:58:32 embed-certs-664524 kubelet[790]: E1229 07:58:32.585303     790 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" containerName="dashboard-metrics-scraper"
	Dec 29 07:58:32 embed-certs-664524 kubelet[790]: I1229 07:58:32.585345     790 scope.go:122] "RemoveContainer" containerID="7c126b6a48dd7c6f938f3cc35158fa83091d03729f40aa2d7f66e9d1798a436a"
	Dec 29 07:58:32 embed-certs-664524 kubelet[790]: E1229 07:58:32.585534     790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7cf5w_kubernetes-dashboard(bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7cf5w" podUID="bfa936fd-3a36-4ced-bed4-baa9e3fe7e9b"
	Dec 29 07:58:45 embed-certs-664524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:58:46 embed-certs-664524 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:58:46 embed-certs-664524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [924f3c959e2ad4291d69a58bd2fdba2b25cbc103d702c8ba5175b57b4d74da63] <==
	2025/12/29 07:58:09 Using namespace: kubernetes-dashboard
	2025/12/29 07:58:09 Using in-cluster config to connect to apiserver
	2025/12/29 07:58:09 Using secret token for csrf signing
	2025/12/29 07:58:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:58:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:58:09 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:58:09 Generating JWE encryption key
	2025/12/29 07:58:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:58:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:58:10 Initializing JWE encryption key from synchronized object
	2025/12/29 07:58:10 Creating in-cluster Sidecar client
	2025/12/29 07:58:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:58:10 Serving insecurely on HTTP port: 9090
	2025/12/29 07:58:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:58:09 Starting overwatch
	
	
	==> storage-provisioner [7539540a5a2b81108d156800d836cce77a98266438d5637aa05f90fbec56968f] <==
	I1229 07:57:59.456760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 07:58:29.464339       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a1e706703f3776fe261699567df5bdac64ac355c0c33633f5f097336d36f805c] <==
	I1229 07:58:29.960492       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:58:29.980268       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:58:29.980399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:58:29.983014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:33.458181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:37.718757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:41.316986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:44.371073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.395928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.404054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:58:47.404341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:58:47.404578       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-664524_8e1b0654-ef3f-4bec-aef6-f9ac06a916dc!
	I1229 07:58:47.406366       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3779bf10-bb43-4dfc-bba7-8f9e72c926a6", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-664524_8e1b0654-ef3f-4bec-aef6-f9ac06a916dc became leader
	W1229 07:58:47.418567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.423656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:58:47.511914       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-664524_8e1b0654-ef3f-4bec-aef6-f9ac06a916dc!
	W1229 07:58:49.432622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:49.440897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:51.444715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:51.450169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-664524 -n embed-certs-664524
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-664524 -n embed-certs-664524: exit status 2 (380.801411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-664524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.81s)
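Note: the status --format={{.APIServer}} probe above uses minikube's Go-template status output; the same mechanism can report several components in one call. A hypothetical follow-up query (not part of the recorded test run) would look like:

	out/minikube-linux-arm64 status -p embed-certs-664524 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'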

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (389.962814ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:58:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-358521 describe deploy/metrics-server -n kube-system: exit status 1 (87.509255ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-358521 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
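The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED path: the addon enable fails at the "check paused" step, which lists runc containers on the node, and that listing fails because /run/runc does not exist on this crio node. A rough manual reproduction of the failing probe (hypothetical invocations against the same profile, assuming the container is still running) would be:

	# the listing the paused-check shells out to; this is the command that exited with status 1
	out/minikube-linux-arm64 -p default-k8s-diff-port-358521 ssh "sudo runc list -f json"
	# cross-check what the CRI runtime itself reports for the same containers
	out/minikube-linux-arm64 -p default-k8s-diff-port-358521 ssh "sudo crictl ps -a"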
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-358521
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-358521:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2",
	        "Created": "2025-12-29T07:58:06.602342108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 943569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:58:06.802557532Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/hosts",
	        "LogPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2-json.log",
	        "Name": "/default-k8s-diff-port-358521",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-358521:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-358521",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2",
	                "LowerDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-358521",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-358521/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-358521",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-358521",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-358521",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32fbda8d56075ac962e47b897fc8472d54434b05e85feaff8aab855d6d46cae2",
	            "SandboxKey": "/var/run/docker/netns/32fbda8d5607",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-358521": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:2b:7d:0d:7b:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "551d951d21be2de7f92b5cdbab7af8ef8b7b661920c518b5b155e983f73740a9",
	                    "EndpointID": "31be12fa7fc06b28981deeea6bb25d6d37ee4b0abc7dc9993561b8786e79a5f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-358521",
	                        "f07d6857d9ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
E1229 07:59:00.241966  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-358521 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-358521 logs -n 25: (1.682294141s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-512867                                                                                                                                                                                                                     │ old-k8s-version-512867       │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:54 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:54 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable metrics-server -p no-preload-822299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │                     │
	│ stop    │ -p no-preload-822299 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:58 UTC │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p force-systemd-flag-122604                                                                                                                                                                                                                  │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p disable-driver-mounts-062900                                                                                                                                                                                                               │ disable-driver-mounts-062900 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ image   │ embed-certs-664524 image list --format=json                                                                                                                                                                                                   │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:58:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:58:55.360970  947476 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:58:55.361136  947476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:55.361148  947476 out.go:374] Setting ErrFile to fd 2...
	I1229 07:58:55.361173  947476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:58:55.361609  947476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:58:55.362197  947476 out.go:368] Setting JSON to false
	I1229 07:58:55.364057  947476 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16887,"bootTime":1766978248,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:58:55.364157  947476 start.go:143] virtualization:  
	I1229 07:58:55.370957  947476 out.go:179] * [newest-cni-921639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:58:55.374549  947476 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:58:55.374785  947476 notify.go:221] Checking for updates...
	I1229 07:58:55.382203  947476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:58:55.385504  947476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:58:55.388848  947476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:58:55.392085  947476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:58:55.395406  947476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:58:55.398966  947476 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:58:55.399079  947476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:58:55.437100  947476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:58:55.437325  947476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:55.493635  947476 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:58:55.484239764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:55.493751  947476 docker.go:319] overlay module found
	I1229 07:58:55.497005  947476 out.go:179] * Using the docker driver based on user configuration
	I1229 07:58:55.499958  947476 start.go:309] selected driver: docker
	I1229 07:58:55.499976  947476 start.go:928] validating driver "docker" against <nil>
	I1229 07:58:55.499989  947476 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:58:55.500732  947476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:58:55.556095  947476 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:58:55.546288614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:58:55.556252  947476 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1229 07:58:55.556290  947476 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1229 07:58:55.556525  947476 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:58:55.560016  947476 out.go:179] * Using Docker driver with root privileges
	I1229 07:58:55.562916  947476 cni.go:84] Creating CNI manager for ""
	I1229 07:58:55.562992  947476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:58:55.563013  947476 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:58:55.563099  947476 start.go:353] cluster config:
	{Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:58:55.566318  947476 out.go:179] * Starting "newest-cni-921639" primary control-plane node in "newest-cni-921639" cluster
	I1229 07:58:55.569242  947476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:58:55.572166  947476 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:58:55.575023  947476 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:55.575076  947476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:58:55.575087  947476 cache.go:65] Caching tarball of preloaded images
	I1229 07:58:55.575115  947476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:58:55.575186  947476 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:58:55.575196  947476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:58:55.575312  947476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/config.json ...
	I1229 07:58:55.575330  947476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/config.json: {Name:mk78162eb7c68e5b23f26510ad5a1847e20ad33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:58:55.594434  947476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:58:55.594453  947476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:58:55.594472  947476 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:58:55.594503  947476 start.go:360] acquireMachinesLock for newest-cni-921639: {Name:mkc8472acf578a41625391ce0b888363af994d9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:58:55.594623  947476 start.go:364] duration metric: took 104.132µs to acquireMachinesLock for "newest-cni-921639"
	I1229 07:58:55.594652  947476 start.go:93] Provisioning new machine with config: &{Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:58:55.594727  947476 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:58:55.598191  947476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:58:55.598437  947476 start.go:159] libmachine.API.Create for "newest-cni-921639" (driver="docker")
	I1229 07:58:55.598490  947476 client.go:173] LocalClient.Create starting
	I1229 07:58:55.598572  947476 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 07:58:55.598611  947476 main.go:144] libmachine: Decoding PEM data...
	I1229 07:58:55.598633  947476 main.go:144] libmachine: Parsing certificate...
	I1229 07:58:55.598699  947476 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 07:58:55.598726  947476 main.go:144] libmachine: Decoding PEM data...
	I1229 07:58:55.598741  947476 main.go:144] libmachine: Parsing certificate...
	I1229 07:58:55.599113  947476 cli_runner.go:164] Run: docker network inspect newest-cni-921639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:58:55.616213  947476 cli_runner.go:211] docker network inspect newest-cni-921639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:58:55.616310  947476 network_create.go:284] running [docker network inspect newest-cni-921639] to gather additional debugging logs...
	I1229 07:58:55.616336  947476 cli_runner.go:164] Run: docker network inspect newest-cni-921639
	W1229 07:58:55.632229  947476 cli_runner.go:211] docker network inspect newest-cni-921639 returned with exit code 1
	I1229 07:58:55.632260  947476 network_create.go:287] error running [docker network inspect newest-cni-921639]: docker network inspect newest-cni-921639: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-921639 not found
	I1229 07:58:55.632274  947476 network_create.go:289] output of [docker network inspect newest-cni-921639]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-921639 not found
	
	** /stderr **
	I1229 07:58:55.632374  947476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:58:55.649995  947476 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 07:58:55.650365  947476 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 07:58:55.650702  947476 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 07:58:55.651003  947476 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-551d951d21be IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:8f:e0:e3:ee:f1} reservation:<nil>}
	I1229 07:58:55.651515  947476 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a15990}
	I1229 07:58:55.651545  947476 network_create.go:124] attempt to create docker network newest-cni-921639 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:58:55.651608  947476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-921639 newest-cni-921639
	I1229 07:58:55.711386  947476 network_create.go:108] docker network newest-cni-921639 192.168.85.0/24 created
	I1229 07:58:55.711423  947476 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-921639" container
	I1229 07:58:55.711498  947476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:58:55.728705  947476 cli_runner.go:164] Run: docker volume create newest-cni-921639 --label name.minikube.sigs.k8s.io=newest-cni-921639 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:58:55.747787  947476 oci.go:103] Successfully created a docker volume newest-cni-921639
	I1229 07:58:55.747879  947476 cli_runner.go:164] Run: docker run --rm --name newest-cni-921639-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-921639 --entrypoint /usr/bin/test -v newest-cni-921639:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:58:56.319645  947476 oci.go:107] Successfully prepared a docker volume newest-cni-921639
	I1229 07:58:56.319710  947476 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:58:56.319720  947476 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:58:56.319792  947476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-921639:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:59:00.315052  947476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-921639:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.995214483s)
	I1229 07:59:00.315087  947476 kic.go:203] duration metric: took 3.995362522s to extract preloaded images to volume ...
	W1229 07:59:00.315242  947476 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:59:00.315364  947476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	
	
	==> CRI-O <==
	Dec 29 07:58:47 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:47.824312404Z" level=info msg="Starting container: 457ee18cac1b858846d44e75f9eedfe1dc8859ef1c846e21b1b80752448e5b1d" id=bad9f41c-df4e-4a10-ab29-e6539b335605 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:58:47 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:47.838332695Z" level=info msg="Started container" PID=1768 containerID=5aed531a521b8243f1fd0fa9e6b96bf291a99b23a65f3df501c1f9c25f65f772 description=kube-system/coredns-7d764666f9-jv9qg/coredns id=6a66a75b-4dca-41ac-bdf4-6cb5bd1d29c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=167f4e3e09fb96221fadf6c52b8fc0cd0fd48b2a42dbf13c29dead59243e8773
	Dec 29 07:58:47 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:47.838427457Z" level=info msg="Started container" PID=1766 containerID=457ee18cac1b858846d44e75f9eedfe1dc8859ef1c846e21b1b80752448e5b1d description=kube-system/storage-provisioner/storage-provisioner id=bad9f41c-df4e-4a10-ab29-e6539b335605 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ccb222724757f3417b0ac068e97c4d8e81ad0b1fe5aca5afd0896400dd3d0ca
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.555020382Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e98ff6b2-09e7-4076-8809-d5b76c7c2d5d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.555105215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.561828245Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ca45fb7f7aadb513311646a2e69e4b940e4a4d779d1607345730fe9655b10e0d UID:e1c7f8ff-3089-47c4-82fd-1061811c1b63 NetNS:/var/run/netns/14484d48-93e0-491c-a048-4c0a6fefe86b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40010ba560}] Aliases:map[]}"
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.561887265Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.581969001Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ca45fb7f7aadb513311646a2e69e4b940e4a4d779d1607345730fe9655b10e0d UID:e1c7f8ff-3089-47c4-82fd-1061811c1b63 NetNS:/var/run/netns/14484d48-93e0-491c-a048-4c0a6fefe86b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40010ba560}] Aliases:map[]}"
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.582154126Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.589657502Z" level=info msg="Ran pod sandbox ca45fb7f7aadb513311646a2e69e4b940e4a4d779d1607345730fe9655b10e0d with infra container: default/busybox/POD" id=e98ff6b2-09e7-4076-8809-d5b76c7c2d5d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.591828538Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=391b93e0-5fbf-4cea-9ecd-39637dc12264 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.592100163Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=391b93e0-5fbf-4cea-9ecd-39637dc12264 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.592287389Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=391b93e0-5fbf-4cea-9ecd-39637dc12264 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.594721976Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1906c5f4-3b14-4e34-9288-745293b0e62f name=/runtime.v1.ImageService/PullImage
	Dec 29 07:58:51 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:51.595378112Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.47384929Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1906c5f4-3b14-4e34-9288-745293b0e62f name=/runtime.v1.ImageService/PullImage
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.47450504Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=afd7123c-7469-4179-8144-04a232bda542 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.478283531Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a6040d9a-5b3c-4988-9e12-0ad012344e04 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.485242328Z" level=info msg="Creating container: default/busybox/busybox" id=00b2fe7f-8790-4315-b0cb-f4bd4fc2775d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.485371035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.490253812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.490715674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.504766235Z" level=info msg="Created container 7bad593a50ac19ce98e21949edb49667f27d70004afc29eb6148b4373e35fbe2: default/busybox/busybox" id=00b2fe7f-8790-4315-b0cb-f4bd4fc2775d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.506119673Z" level=info msg="Starting container: 7bad593a50ac19ce98e21949edb49667f27d70004afc29eb6148b4373e35fbe2" id=99df80ab-f68d-437e-a213-eccded98a749 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:58:53 default-k8s-diff-port-358521 crio[842]: time="2025-12-29T07:58:53.507814594Z" level=info msg="Started container" PID=1833 containerID=7bad593a50ac19ce98e21949edb49667f27d70004afc29eb6148b4373e35fbe2 description=default/busybox/busybox id=99df80ab-f68d-437e-a213-eccded98a749 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca45fb7f7aadb513311646a2e69e4b940e4a4d779d1607345730fe9655b10e0d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	7bad593a50ac1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   ca45fb7f7aadb       busybox                                                default
	5aed531a521b8       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   167f4e3e09fb9       coredns-7d764666f9-jv9qg                               kube-system
	457ee18cac1b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   7ccb222724757       storage-provisioner                                    kube-system
	7ef693ae0591c       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    25 seconds ago      Running             kindnet-cni               0                   fcf2f54b6fb94       kindnet-x9pdb                                          kube-system
	fe91e56e19227       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   3f6dae21da78a       kube-proxy-58t84                                       kube-system
	40861b6039bbc       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      38 seconds ago      Running             etcd                      0                   ffa2877277261       etcd-default-k8s-diff-port-358521                      kube-system
	4b4df6c7f3324       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      38 seconds ago      Running             kube-apiserver            0                   8e8507436ab3c       kube-apiserver-default-k8s-diff-port-358521            kube-system
	4eb1dceed9cf9       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      38 seconds ago      Running             kube-scheduler            0                   aaa36a7d62242       kube-scheduler-default-k8s-diff-port-358521            kube-system
	647a3e4cf46ae       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      38 seconds ago      Running             kube-controller-manager   0                   cee57ec7f26e4       kube-controller-manager-default-k8s-diff-port-358521   kube-system
	
	
	==> coredns [5aed531a521b8243f1fd0fa9e6b96bf291a99b23a65f3df501c1f9c25f65f772] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59996 - 45246 "HINFO IN 6905272824897924144.4523813527615294895. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024474237s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-358521
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-358521
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=default-k8s-diff-port-358521
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_58_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-358521
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:58:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:58:59 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:58:59 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:58:59 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 07:58:59 +0000   Mon, 29 Dec 2025 07:58:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-358521
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9d844aca-8eb8-4646-88f8-c1cdf6fd6261
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-jv9qg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-default-k8s-diff-port-358521                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-x9pdb                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-358521             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-358521    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-58t84                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-358521             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node default-k8s-diff-port-358521 event: Registered Node default-k8s-diff-port-358521 in Controller
	
	
	==> dmesg <==
	[Dec29 07:31] overlayfs: idmapped layers are currently not supported
	[Dec29 07:32] overlayfs: idmapped layers are currently not supported
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [40861b6039bbc50eda9f11be18b01405f6fbf98b4cf4ee759d732d5a46c9a61f] <==
	{"level":"info","ts":"2025-12-29T07:58:22.939404Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:58:23.915380Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:58:23.915429Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:58:23.915497Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-29T07:58:23.915508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:58:23.915523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:58:23.916600Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:58:23.916634Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:58:23.916654Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:58:23.916664Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:58:23.917913Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:58:23.918115Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:58:23.918264Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:58:23.918281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:58:23.917913Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-358521 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:58:23.917933Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:58:23.920447Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:58:23.920835Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:58:23.923020Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:58:23.923340Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:58:23.923475Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:58:23.923536Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:58:23.924208Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:58:23.924207Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:58:23.924467Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 07:59:01 up  4:41,  0 user,  load average: 2.77, 2.10, 2.27
	Linux default-k8s-diff-port-358521 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7ef693ae0591cd8abfd1619ac8fc5e16c68249a530a87cc6a2396cd6d7a7e386] <==
	I1229 07:58:36.656746       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:58:36.657880       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:58:36.659501       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:58:36.659527       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:58:36.659542       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:58:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:58:36.858900       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:58:36.859814       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:58:36.859875       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:58:36.859996       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1229 07:58:37.160595       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 07:58:37.160714       1 metrics.go:72] Registering metrics
	I1229 07:58:37.160816       1 controller.go:711] "Syncing nftables rules"
	I1229 07:58:46.858728       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:58:46.858788       1 main.go:301] handling current node
	I1229 07:58:56.858977       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 07:58:56.859009       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4b4df6c7f3324f3ea166a809403acfa757cb0e28d210b2d84ec901db76dee94b] <==
	I1229 07:58:25.875277       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:58:25.887677       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1229 07:58:25.887913       1 aggregator.go:187] initial CRD sync complete...
	I1229 07:58:25.887956       1 autoregister_controller.go:144] Starting autoregister controller
	I1229 07:58:25.887984       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1229 07:58:25.888013       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:58:25.904901       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:58:26.583116       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:58:26.590674       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:58:26.590701       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:58:27.389878       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:58:27.446177       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:58:27.517494       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:58:27.528608       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1229 07:58:27.529781       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:58:27.534322       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:58:27.720306       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:58:28.702654       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:58:28.719645       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:58:28.747860       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:58:33.226164       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:58:33.279812       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:58:33.300454       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:58:33.677664       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1229 07:58:59.387258       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:55844: use of closed network connection
	
	
	==> kube-controller-manager [647a3e4cf46ae45fa6dbecc4769e75b3ddbea098ef425e4dae60d597c955985d] <==
	I1229 07:58:32.543758       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543765       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543771       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.550768       1 range_allocator.go:177] "Sending events to api server"
	I1229 07:58:32.550815       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1229 07:58:32.550847       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:58:32.550875       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543778       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543783       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543789       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543796       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543801       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543807       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543812       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543818       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543824       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.556925       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543832       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.543837       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.576657       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-358521" podCIDRs=["10.244.0.0/24"]
	I1229 07:58:32.633802       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.641131       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:32.641154       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:58:32.641160       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:58:47.544656       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [fe91e56e19227c9aaf9b23b22b39d020668b9f36945036ca6262dbbe911af3ec] <==
	I1229 07:58:34.603339       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:58:34.808263       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:58:34.909399       1 shared_informer.go:377] "Caches are synced"
	I1229 07:58:34.909435       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:58:34.909522       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:58:34.999558       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:58:34.999626       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:58:35.018344       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:58:35.018735       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:58:35.018761       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:58:35.032307       1 config.go:200] "Starting service config controller"
	I1229 07:58:35.032327       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:58:35.032355       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:58:35.032359       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:58:35.032372       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:58:35.032376       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:58:35.033266       1 config.go:309] "Starting node config controller"
	I1229 07:58:35.033283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:58:35.033295       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:58:35.133126       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:58:35.133173       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:58:35.133211       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4eb1dceed9cf9a3b5f0bb34238860e77b3685947bfe6c95486ec1e5d3f2b1fe0] <==
	E1229 07:58:25.866068       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:58:25.866140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:58:25.866208       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:58:25.866235       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:58:25.866270       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:58:25.866300       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:58:25.866331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:58:25.866363       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:58:25.866400       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:58:25.866429       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:58:25.866534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:58:25.873171       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:58:25.873301       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:58:26.700547       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:58:26.813502       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:58:26.857120       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:58:26.864307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:58:26.870205       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:58:26.973972       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:58:27.047190       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:58:27.052675       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:58:27.123787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:58:27.148110       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:58:27.347483       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1229 07:58:30.019569       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:58:34 default-k8s-diff-port-358521 kubelet[1302]: W1229 07:58:34.402305    1302 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/crio-3f6dae21da78a4e6318ac331525469b4762745f9acd9c81d331179302c5523be WatchSource:0}: Error finding container 3f6dae21da78a4e6318ac331525469b4762745f9acd9c81d331179302c5523be: Status 404 returned error can't find the container with id 3f6dae21da78a4e6318ac331525469b4762745f9acd9c81d331179302c5523be
	Dec 29 07:58:34 default-k8s-diff-port-358521 kubelet[1302]: W1229 07:58:34.414892    1302 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/crio-fcf2f54b6fb94bf3eea5b8ad17f60287b16219aaf1232e801d49caf88d87d8c8 WatchSource:0}: Error finding container fcf2f54b6fb94bf3eea5b8ad17f60287b16219aaf1232e801d49caf88d87d8c8: Status 404 returned error can't find the container with id fcf2f54b6fb94bf3eea5b8ad17f60287b16219aaf1232e801d49caf88d87d8c8
	Dec 29 07:58:34 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:34.831929    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-358521" containerName="kube-scheduler"
	Dec 29 07:58:34 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:34.874195    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-58t84" podStartSLOduration=1.874178665 podStartE2EDuration="1.874178665s" podCreationTimestamp="2025-12-29 07:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:58:34.836432969 +0000 UTC m=+6.313824194" watchObservedRunningTime="2025-12-29 07:58:34.874178665 +0000 UTC m=+6.351569889"
	Dec 29 07:58:36 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:36.616778    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-358521" containerName="kube-apiserver"
	Dec 29 07:58:37 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:37.519387    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-358521" containerName="etcd"
	Dec 29 07:58:37 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:37.532897    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-x9pdb" podStartSLOduration=2.440529739 podStartE2EDuration="4.532880348s" podCreationTimestamp="2025-12-29 07:58:33 +0000 UTC" firstStartedPulling="2025-12-29 07:58:34.421696635 +0000 UTC m=+5.899087860" lastFinishedPulling="2025-12-29 07:58:36.514047244 +0000 UTC m=+7.991438469" observedRunningTime="2025-12-29 07:58:36.829219163 +0000 UTC m=+8.306610396" watchObservedRunningTime="2025-12-29 07:58:37.532880348 +0000 UTC m=+9.010271589"
	Dec 29 07:58:38 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:38.098830    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-358521" containerName="kube-controller-manager"
	Dec 29 07:58:44 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:44.839582    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-358521" containerName="kube-scheduler"
	Dec 29 07:58:46 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:46.633522    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-358521" containerName="kube-apiserver"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:47.291724    1302 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:47.426942    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d90c5059-2a03-4f81-ae2f-d4b1096b9980-config-volume\") pod \"coredns-7d764666f9-jv9qg\" (UID: \"d90c5059-2a03-4f81-ae2f-d4b1096b9980\") " pod="kube-system/coredns-7d764666f9-jv9qg"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:47.427140    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptpx\" (UniqueName: \"kubernetes.io/projected/d90c5059-2a03-4f81-ae2f-d4b1096b9980-kube-api-access-nptpx\") pod \"coredns-7d764666f9-jv9qg\" (UID: \"d90c5059-2a03-4f81-ae2f-d4b1096b9980\") " pod="kube-system/coredns-7d764666f9-jv9qg"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:47.427320    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a0877c5-d210-4e03-947b-f418cf49454a-tmp\") pod \"storage-provisioner\" (UID: \"3a0877c5-d210-4e03-947b-f418cf49454a\") " pod="kube-system/storage-provisioner"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:47.427460    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5sg7\" (UniqueName: \"kubernetes.io/projected/3a0877c5-d210-4e03-947b-f418cf49454a-kube-api-access-s5sg7\") pod \"storage-provisioner\" (UID: \"3a0877c5-d210-4e03-947b-f418cf49454a\") " pod="kube-system/storage-provisioner"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:47.521567    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-358521" containerName="etcd"
	Dec 29 07:58:47 default-k8s-diff-port-358521 kubelet[1302]: W1229 07:58:47.740857    1302 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/crio-167f4e3e09fb96221fadf6c52b8fc0cd0fd48b2a42dbf13c29dead59243e8773 WatchSource:0}: Error finding container 167f4e3e09fb96221fadf6c52b8fc0cd0fd48b2a42dbf13c29dead59243e8773: Status 404 returned error can't find the container with id 167f4e3e09fb96221fadf6c52b8fc0cd0fd48b2a42dbf13c29dead59243e8773
	Dec 29 07:58:48 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:48.128239    1302 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-358521" containerName="kube-controller-manager"
	Dec 29 07:58:48 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:48.850049    1302 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jv9qg" containerName="coredns"
	Dec 29 07:58:48 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:48.873992    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jv9qg" podStartSLOduration=14.873979135999999 podStartE2EDuration="14.873979136s" podCreationTimestamp="2025-12-29 07:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:58:48.873457319 +0000 UTC m=+20.350848577" watchObservedRunningTime="2025-12-29 07:58:48.873979136 +0000 UTC m=+20.351370361"
	Dec 29 07:58:48 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:48.939602    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.939588332 podStartE2EDuration="13.939588332s" podCreationTimestamp="2025-12-29 07:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:58:48.914707487 +0000 UTC m=+20.392098712" watchObservedRunningTime="2025-12-29 07:58:48.939588332 +0000 UTC m=+20.416979557"
	Dec 29 07:58:49 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:49.860334    1302 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jv9qg" containerName="coredns"
	Dec 29 07:58:50 default-k8s-diff-port-358521 kubelet[1302]: E1229 07:58:50.862733    1302 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jv9qg" containerName="coredns"
	Dec 29 07:58:51 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:51.390140    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t942v\" (UniqueName: \"kubernetes.io/projected/e1c7f8ff-3089-47c4-82fd-1061811c1b63-kube-api-access-t942v\") pod \"busybox\" (UID: \"e1c7f8ff-3089-47c4-82fd-1061811c1b63\") " pod="default/busybox"
	Dec 29 07:58:53 default-k8s-diff-port-358521 kubelet[1302]: I1229 07:58:53.883855    1302 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.002619948 podStartE2EDuration="2.883837871s" podCreationTimestamp="2025-12-29 07:58:51 +0000 UTC" firstStartedPulling="2025-12-29 07:58:51.594106546 +0000 UTC m=+23.071497770" lastFinishedPulling="2025-12-29 07:58:53.475324468 +0000 UTC m=+24.952715693" observedRunningTime="2025-12-29 07:58:53.883473774 +0000 UTC m=+25.360864999" watchObservedRunningTime="2025-12-29 07:58:53.883837871 +0000 UTC m=+25.361229096"
	
	
	==> storage-provisioner [457ee18cac1b858846d44e75f9eedfe1dc8859ef1c846e21b1b80752448e5b1d] <==
	I1229 07:58:47.888548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 07:58:47.935905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 07:58:47.935957       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 07:58:47.938232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.949392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:58:47.949625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 07:58:47.949841       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-358521_20cf9d06-1008-445b-b5c3-17849758690b!
	I1229 07:58:47.951692       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e14c6f9d-8118-4d5d-8fe5-5b8946a8ad35", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-358521_20cf9d06-1008-445b-b5c3-17849758690b became leader
	W1229 07:58:47.951832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:47.974620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 07:58:48.053996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-358521_20cf9d06-1008-445b-b5c3-17849758690b!
	W1229 07:58:49.979579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:49.989024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:51.997455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:52.006720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:54.010454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:54.015977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:56.019767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:56.025183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:58.029629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:58:58.034999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:59:00.053249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:59:00.110602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:59:02.115701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 07:59:02.123318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (423.525861ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
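Note (editorial, not part of the recorded test output): the MK_ADDON_ENABLE_PAUSED failure above is raised by minikube's paused-state check rather than by the addon itself. Per the stderr block, the check shells into the node and runs "sudo runc list -f json", which fails because /run/runc does not exist on this crio profile. A minimal diagnostic sketch, assuming the profile name newest-cni-921639 shown in the log (these commands are illustrative only):

	out/minikube-linux-arm64 -p newest-cni-921639 ssh -- "ls /run/runc"            # expected to fail with "No such file or directory"
	out/minikube-linux-arm64 -p newest-cni-921639 ssh -- "sudo runc list -f json"  # should reproduce the error shown in the stderr block above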
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-921639
helpers_test.go:244: (dbg) docker inspect newest-cni-921639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6",
	        "Created": "2025-12-29T07:59:00.549488049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 947998,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:59:00.615975433Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/hosts",
	        "LogPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6-json.log",
	        "Name": "/newest-cni-921639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-921639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-921639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6",
	                "LowerDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-921639",
	                "Source": "/var/lib/docker/volumes/newest-cni-921639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-921639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-921639",
	                "name.minikube.sigs.k8s.io": "newest-cni-921639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9cc9a5d28d910c8d2e7beec3af185a7acde0df52b87549d2d2e383ab5fd21b99",
	            "SandboxKey": "/var/run/docker/netns/9cc9a5d28d91",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33852"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-921639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:0f:d8:8d:45:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c6a7ca1056e7338b014b856b3d02e72a65fdee86fa30ab64b1ed8a6a45ad6d1",
	                    "EndpointID": "7d7d92be22d9b99d8b27efd924d02874c25fd786931b5e08201cd5d64dd156d4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-921639",
	                        "829fd18972bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-921639 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-921639 logs -n 25: (2.078118556s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:55 UTC │
	│ start   │ -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:55 UTC │ 29 Dec 25 07:56 UTC │
	│ image   │ no-preload-822299 image list --format=json                                                                                                                                                                                                    │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ pause   │ -p no-preload-822299 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │                     │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:58 UTC │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p force-systemd-flag-122604                                                                                                                                                                                                                  │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p disable-driver-mounts-062900                                                                                                                                                                                                               │ disable-driver-mounts-062900 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ image   │ embed-certs-664524 image list --format=json                                                                                                                                                                                                   │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-358521 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-358521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:59:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:59:15.366705  950174 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:59:15.366832  950174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:15.366844  950174 out.go:374] Setting ErrFile to fd 2...
	I1229 07:59:15.366850  950174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:15.367194  950174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:59:15.367673  950174 out.go:368] Setting JSON to false
	I1229 07:59:15.368642  950174 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16907,"bootTime":1766978248,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:59:15.368733  950174 start.go:143] virtualization:  
	I1229 07:59:15.375056  950174 out.go:179] * [default-k8s-diff-port-358521] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:59:15.378874  950174 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:59:15.379070  950174 notify.go:221] Checking for updates...
	I1229 07:59:15.384638  950174 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:59:15.387427  950174 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:15.390267  950174 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:59:15.393138  950174 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:59:15.395979  950174 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:59:15.399191  950174 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:15.399758  950174 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:59:15.466403  950174 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:59:15.466519  950174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:15.590477  950174 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:59:15.573991561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:15.590575  950174 docker.go:319] overlay module found
	I1229 07:59:15.594015  950174 out.go:179] * Using the docker driver based on existing profile
	I1229 07:59:15.596977  950174 start.go:309] selected driver: docker
	I1229 07:59:15.596994  950174 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:15.597104  950174 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:59:15.597795  950174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:15.700705  950174 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:59:15.681634991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:15.701104  950174 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:59:15.701134  950174 cni.go:84] Creating CNI manager for ""
	I1229 07:59:15.701188  950174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:15.701229  950174 start.go:353] cluster config:
	{Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:15.704351  950174 out.go:179] * Starting "default-k8s-diff-port-358521" primary control-plane node in "default-k8s-diff-port-358521" cluster
	I1229 07:59:15.707121  950174 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:59:15.710117  950174 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:59:15.712907  950174 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:59:15.712959  950174 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:59:15.712972  950174 cache.go:65] Caching tarball of preloaded images
	I1229 07:59:15.713057  950174 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:59:15.713067  950174 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:59:15.713190  950174 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json ...
	I1229 07:59:15.713410  950174 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:59:15.743000  950174 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:59:15.743020  950174 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:59:15.743035  950174 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:59:15.743066  950174 start.go:360] acquireMachinesLock for default-k8s-diff-port-358521: {Name:mk006058b00af6cad673fa6451223fba67b65954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:59:15.743118  950174 start.go:364] duration metric: took 35.799µs to acquireMachinesLock for "default-k8s-diff-port-358521"
	I1229 07:59:15.743137  950174 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:59:15.743142  950174 fix.go:54] fixHost starting: 
	I1229 07:59:15.743399  950174 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:59:15.779772  950174 fix.go:112] recreateIfNeeded on default-k8s-diff-port-358521: state=Stopped err=<nil>
	W1229 07:59:15.779801  950174 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:59:15.761623  947476 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.024256808s
	I1229 07:59:18.283576  947476 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.546856635s
	I1229 07:59:20.239720  947476 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502677325s
	I1229 07:59:20.283204  947476 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1229 07:59:20.308754  947476 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1229 07:59:20.337121  947476 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1229 07:59:20.337311  947476 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-921639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1229 07:59:20.357120  947476 kubeadm.go:319] [bootstrap-token] Using token: 925yc0.fj6g5hizjuv1yohx
	I1229 07:59:20.360211  947476 out.go:252]   - Configuring RBAC rules ...
	I1229 07:59:20.360344  947476 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1229 07:59:15.782991  950174 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-358521" ...
	I1229 07:59:15.783073  950174 cli_runner.go:164] Run: docker start default-k8s-diff-port-358521
	I1229 07:59:16.188231  950174 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:59:16.215134  950174 kic.go:430] container "default-k8s-diff-port-358521" state is running.
	I1229 07:59:16.215544  950174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:59:16.233426  950174 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/config.json ...
	I1229 07:59:16.233660  950174 machine.go:94] provisionDockerMachine start ...
	I1229 07:59:16.233723  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:16.259915  950174 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:16.260240  950174 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1229 07:59:16.260261  950174 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:59:16.260930  950174 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1229 07:59:19.424601  950174 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-358521
	
	I1229 07:59:19.424625  950174 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-358521"
	I1229 07:59:19.424687  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:19.445930  950174 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:19.446373  950174 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1229 07:59:19.446389  950174 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-358521 && echo "default-k8s-diff-port-358521" | sudo tee /etc/hostname
	I1229 07:59:19.629310  950174 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-358521
	
	I1229 07:59:19.629414  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:19.648809  950174 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:19.649168  950174 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1229 07:59:19.649186  950174 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-358521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-358521/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-358521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:59:19.805047  950174 main.go:144] libmachine: SSH cmd err, output: <nil>: 
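	The sequence above is the provisioner driving the restarted container over SSH: it connects as user "docker" to the published port 127.0.0.1:33853 with the machine's id_rsa key, sets the hostname, and maps the node name to 127.0.1.1 in /etc/hosts. As a rough, standalone sketch only (this is not minikube's own code), the same kind of remote command can be issued from Go with golang.org/x/crypto/ssh; user, port and key path below are the ones shown in this log:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user and port are taken from the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Acceptable only for a throwaway test container; never skip host key checks in production.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33853", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}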
	I1229 07:59:19.805133  950174 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:59:19.805179  950174 ubuntu.go:190] setting up certificates
	I1229 07:59:19.805216  950174 provision.go:84] configureAuth start
	I1229 07:59:19.805337  950174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:59:19.824215  950174 provision.go:143] copyHostCerts
	I1229 07:59:19.824284  950174 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:59:19.824300  950174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:59:19.824389  950174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:59:19.824544  950174 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:59:19.824550  950174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:59:19.824579  950174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:59:19.824651  950174 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:59:19.824656  950174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:59:19.824681  950174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:59:19.824731  950174 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-358521 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-358521 localhost minikube]
	I1229 07:59:20.370850  947476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1229 07:59:20.392275  947476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1229 07:59:20.393916  947476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1229 07:59:20.401025  947476 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1229 07:59:20.406289  947476 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1229 07:59:20.650138  947476 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1229 07:59:21.091694  947476 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1229 07:59:21.651814  947476 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1229 07:59:21.653424  947476 kubeadm.go:319] 
	I1229 07:59:21.653508  947476 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1229 07:59:21.653519  947476 kubeadm.go:319] 
	I1229 07:59:21.653592  947476 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1229 07:59:21.653600  947476 kubeadm.go:319] 
	I1229 07:59:21.653624  947476 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1229 07:59:21.653684  947476 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1229 07:59:21.653745  947476 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1229 07:59:21.653753  947476 kubeadm.go:319] 
	I1229 07:59:21.653805  947476 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1229 07:59:21.653812  947476 kubeadm.go:319] 
	I1229 07:59:21.653858  947476 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1229 07:59:21.653865  947476 kubeadm.go:319] 
	I1229 07:59:21.653914  947476 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1229 07:59:21.653988  947476 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1229 07:59:21.654056  947476 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1229 07:59:21.654064  947476 kubeadm.go:319] 
	I1229 07:59:21.654143  947476 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1229 07:59:21.654219  947476 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1229 07:59:21.654227  947476 kubeadm.go:319] 
	I1229 07:59:21.654306  947476 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 925yc0.fj6g5hizjuv1yohx \
	I1229 07:59:21.654406  947476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 \
	I1229 07:59:21.654428  947476 kubeadm.go:319] 	--control-plane 
	I1229 07:59:21.654445  947476 kubeadm.go:319] 
	I1229 07:59:21.654530  947476 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1229 07:59:21.654543  947476 kubeadm.go:319] 
	I1229 07:59:21.654623  947476 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 925yc0.fj6g5hizjuv1yohx \
	I1229 07:59:21.654723  947476 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:694e62cf6672eeab93d1b16a5d0241de93f50bf689e0759e1e71519cfe1b6934 
	I1229 07:59:21.660107  947476 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:59:21.660539  947476 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:59:21.660657  947476 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:59:21.660680  947476 cni.go:84] Creating CNI manager for ""
	I1229 07:59:21.660690  947476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:21.663796  947476 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1229 07:59:20.581132  950174 provision.go:177] copyRemoteCerts
	I1229 07:59:20.581205  950174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:59:20.581249  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:20.604093  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:20.719503  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:59:20.755871  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1229 07:59:20.779149  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:59:20.796755  950174 provision.go:87] duration metric: took 991.506519ms to configureAuth
	I1229 07:59:20.796858  950174 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:59:20.797049  950174 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:20.797188  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:20.830457  950174 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:20.830778  950174 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1229 07:59:20.830792  950174 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1229 07:59:21.230494  950174 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:59:21.230565  950174 machine.go:97] duration metric: took 4.996894724s to provisionDockerMachine
	I1229 07:59:21.230593  950174 start.go:293] postStartSetup for "default-k8s-diff-port-358521" (driver="docker")
	I1229 07:59:21.230636  950174 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:59:21.230727  950174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:59:21.230814  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:21.258767  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:21.374032  950174 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:59:21.377782  950174 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:59:21.377808  950174 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:59:21.377820  950174 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:59:21.377877  950174 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:59:21.377956  950174 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:59:21.378062  950174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:59:21.386389  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:59:21.418249  950174 start.go:296] duration metric: took 187.623554ms for postStartSetup
	I1229 07:59:21.418332  950174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:59:21.418370  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:21.438394  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:21.550188  950174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:59:21.555332  950174 fix.go:56] duration metric: took 5.812182071s for fixHost
	I1229 07:59:21.555358  950174 start.go:83] releasing machines lock for "default-k8s-diff-port-358521", held for 5.812232114s
	I1229 07:59:21.555434  950174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-358521
	I1229 07:59:21.575270  950174 ssh_runner.go:195] Run: cat /version.json
	I1229 07:59:21.575325  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:21.575596  950174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:59:21.575651  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:21.597047  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:21.606715  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:21.704688  950174 ssh_runner.go:195] Run: systemctl --version
	I1229 07:59:21.809534  950174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:59:21.870446  950174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:59:21.876535  950174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:59:21.876709  950174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:59:21.886770  950174 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:59:21.886851  950174 start.go:496] detecting cgroup driver to use...
	I1229 07:59:21.886933  950174 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:59:21.887030  950174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:59:21.904496  950174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:59:21.925816  950174 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:59:21.925943  950174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:59:21.943186  950174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:59:21.961892  950174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:59:22.148707  950174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:59:22.381150  950174 docker.go:234] disabling docker service ...
	I1229 07:59:22.381273  950174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:59:22.399014  950174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:59:22.416463  950174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:59:22.583136  950174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:59:22.710450  950174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:59:22.728215  950174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:59:22.744081  950174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:59:22.744155  950174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.754391  950174 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:59:22.754527  950174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.767300  950174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.778985  950174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.790131  950174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:59:22.798512  950174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.807825  950174 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.816129  950174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:22.827502  950174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:59:22.836365  950174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:59:22.843973  950174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:22.996626  950174 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:59:23.201147  950174 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:59:23.201227  950174 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:59:23.204925  950174 start.go:574] Will wait 60s for crictl version
	I1229 07:59:23.204986  950174 ssh_runner.go:195] Run: which crictl
	I1229 07:59:23.208374  950174 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:59:23.245663  950174 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:59:23.245754  950174 ssh_runner.go:195] Run: crio --version
	I1229 07:59:23.276745  950174 ssh_runner.go:195] Run: crio --version
	I1229 07:59:23.310891  950174 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:59:23.313718  950174 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-358521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:59:23.330831  950174 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:59:23.334649  950174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:59:23.344447  950174 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:59:23.344572  950174 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:59:23.344626  950174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:59:23.384589  950174 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:59:23.384610  950174 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:59:23.384664  950174 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:59:23.428229  950174 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:59:23.428291  950174 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:59:23.428321  950174 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1229 07:59:23.428463  950174 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-358521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:59:23.428584  950174 ssh_runner.go:195] Run: crio config
	I1229 07:59:23.521794  950174 cni.go:84] Creating CNI manager for ""
	I1229 07:59:23.521869  950174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:23.521897  950174 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:59:23.521951  950174 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-358521 NodeName:default-k8s-diff-port-358521 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:59:23.522136  950174 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-358521"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
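	The block above is the complete multi-document kubeadm configuration minikube renders for this profile (an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration); a few lines below it is copied to the node as /var/tmp/minikube/kubeadm.yaml.new. Purely as an illustrative sketch (not minikube code), a multi-document file like this can be split and inspected from Go with gopkg.in/yaml.v3:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Hypothetical local copy of the rendered config shown above.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more documents
				}
				log.Fatal(err)
			}
			// e.g. "kubeadm.k8s.io/v1beta4 ClusterConfiguration"
			fmt.Println(doc["apiVersion"], doc["kind"])
		}
	}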
	
	I1229 07:59:23.522251  950174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:59:23.530998  950174 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:59:23.531116  950174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:59:23.540979  950174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1229 07:59:23.554964  950174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:59:23.568499  950174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1229 07:59:23.582546  950174 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:59:23.586225  950174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:59:23.595985  950174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:23.715004  950174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:23.737578  950174 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521 for IP: 192.168.76.2
	I1229 07:59:23.737601  950174 certs.go:195] generating shared ca certs ...
	I1229 07:59:23.737634  950174 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:23.737782  950174 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:59:23.737834  950174 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:59:23.737846  950174 certs.go:257] generating profile certs ...
	I1229 07:59:23.737935  950174 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.key
	I1229 07:59:23.738020  950174 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key.22f9918a
	I1229 07:59:23.738069  950174 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key
	I1229 07:59:23.738182  950174 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:59:23.738219  950174 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:59:23.738232  950174 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:59:23.738257  950174 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:59:23.738286  950174 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:59:23.738313  950174 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:59:23.738363  950174 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:59:23.739004  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:59:23.759548  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:59:23.779493  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:59:23.797396  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:59:23.815123  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1229 07:59:23.839917  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:59:23.865267  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:59:23.885844  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:59:23.924441  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:59:23.990550  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:59:24.025624  950174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:59:24.058911  950174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:59:24.077478  950174 ssh_runner.go:195] Run: openssl version
	I1229 07:59:24.084354  950174 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:24.092259  950174 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:59:24.100363  950174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:24.105580  950174 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:24.105650  950174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:24.150761  950174 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:59:24.159955  950174 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:59:24.167882  950174 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:59:24.177461  950174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:59:24.181289  950174 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:59:24.181358  950174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:59:24.223188  950174 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:59:24.232076  950174 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:59:24.241295  950174 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:59:24.249397  950174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:59:24.253244  950174 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:59:24.253331  950174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:59:24.296308  950174 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:59:24.303926  950174 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:59:24.307921  950174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:59:24.354125  950174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:59:24.405731  950174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:59:24.453915  950174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:59:24.497783  950174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:59:24.547406  950174 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
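	Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path checks here that the existing control-plane certificates are still usable before reusing them. A minimal standalone equivalent in Go's crypto/x509, assuming only that the file is a PEM-encoded certificate:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// One of the certificates checked in the log above.
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}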
	I1229 07:59:24.592587  950174 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-358521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-358521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:24.592726  950174 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:59:24.592853  950174 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:59:24.638871  950174 cri.go:96] found id: ""
	I1229 07:59:24.638961  950174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:59:24.650213  950174 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:59:24.650232  950174 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:59:24.650300  950174 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:59:24.657857  950174 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:59:24.658335  950174 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-358521" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:24.658481  950174 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-358521" cluster setting kubeconfig missing "default-k8s-diff-port-358521" context setting]
	I1229 07:59:24.658840  950174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:24.660363  950174 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:59:24.674781  950174 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1229 07:59:24.674856  950174 kubeadm.go:602] duration metric: took 24.618126ms to restartPrimaryControlPlane
	I1229 07:59:24.674882  950174 kubeadm.go:403] duration metric: took 82.305761ms to StartCluster
	I1229 07:59:24.674925  950174 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:24.675003  950174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:24.675663  950174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:24.675899  950174 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:59:24.676294  950174 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:24.676355  950174 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:59:24.676447  950174 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-358521"
	I1229 07:59:24.676461  950174 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-358521"
	W1229 07:59:24.676468  950174 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:59:24.676492  950174 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-358521"
	I1229 07:59:24.676500  950174 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-358521"
	I1229 07:59:24.676509  950174 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-358521"
	W1229 07:59:24.676516  950174 addons.go:248] addon dashboard should already be in state true
	I1229 07:59:24.676532  950174 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-358521"
	I1229 07:59:24.676536  950174 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:59:24.676887  950174 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:59:24.677151  950174 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:59:24.676494  950174 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:59:24.677986  950174 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:59:24.690220  950174 out.go:179] * Verifying Kubernetes components...
	I1229 07:59:24.694959  950174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:24.724356  950174 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-358521"
	W1229 07:59:24.724377  950174 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:59:24.724404  950174 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 07:59:24.728995  950174 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 07:59:24.730978  950174 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1229 07:59:24.735059  950174 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:59:24.738386  950174 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:59:24.738417  950174 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:59:24.738486  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:24.744710  950174 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:59:21.666791  947476 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1229 07:59:21.671182  947476 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1229 07:59:21.671208  947476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1229 07:59:21.687886  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1229 07:59:22.129882  947476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1229 07:59:22.130020  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:22.130111  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-921639 minikube.k8s.io/updated_at=2025_12_29T07_59_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8 minikube.k8s.io/name=newest-cni-921639 minikube.k8s.io/primary=true
	I1229 07:59:22.404917  947476 ops.go:34] apiserver oom_adj: -16
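	The oom_adj value of -16 read back above tells the kernel's OOM killer to strongly prefer other processes over kube-apiserver when memory runs out. A tiny illustrative helper (not part of minikube) for reading that value for an arbitrary PID:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strconv"
		"strings"
	)

	// oomAdj reads the legacy /proc/<pid>/oom_adj knob the log above inspects;
	// newer tooling usually reads /proc/<pid>/oom_score_adj instead.
	func oomAdj(pid int) (int, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(b)))
	}

	func main() {
		// PID 1 is only an example; the log resolves the apiserver PID via pgrep.
		v, err := oomAdj(1)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("oom_adj:", v)
	}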
	I1229 07:59:22.405044  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:22.905214  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:23.405942  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:23.905241  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:24.405976  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:24.905150  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:24.749090  950174 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:24.749114  950174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:59:24.749181  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:24.774461  950174 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:24.774481  950174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:59:24.774539  950174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 07:59:24.790204  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:24.814652  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:24.815492  950174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 07:59:25.122277  950174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:25.174704  950174 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-358521" to be "Ready" ...
	I1229 07:59:25.341754  950174 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:59:25.341789  950174 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:59:25.405590  947476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1229 07:59:25.653429  947476 kubeadm.go:1114] duration metric: took 3.523449796s to wait for elevateKubeSystemPrivileges
	I1229 07:59:25.653466  947476 kubeadm.go:403] duration metric: took 16.375912705s to StartCluster
	I1229 07:59:25.653483  947476 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:25.653559  947476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:25.654535  947476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:25.654752  947476 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:59:25.654838  947476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1229 07:59:25.655107  947476 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:25.655146  947476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:59:25.655205  947476 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-921639"
	I1229 07:59:25.655221  947476 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-921639"
	I1229 07:59:25.655242  947476 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:25.655770  947476 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:25.656321  947476 addons.go:70] Setting default-storageclass=true in profile "newest-cni-921639"
	I1229 07:59:25.656354  947476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-921639"
	I1229 07:59:25.656661  947476 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:25.661566  947476 out.go:179] * Verifying Kubernetes components...
	I1229 07:59:25.668555  947476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:25.694381  947476 addons.go:239] Setting addon default-storageclass=true in "newest-cni-921639"
	I1229 07:59:25.694421  947476 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:25.694834  947476 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:25.710750  947476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:59:25.715970  947476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:25.715996  947476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:59:25.716063  947476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:25.724423  947476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:25.724444  947476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:59:25.724511  947476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:25.759280  947476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33848 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:25.760141  947476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33848 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:26.272434  947476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:26.339856  947476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:26.632044  947476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1229 07:59:26.632208  947476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:27.989411  947476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.716900466s)
	I1229 07:59:27.989490  947476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.649569885s)
	I1229 07:59:27.989800  947476 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.357555774s)
	I1229 07:59:27.991050  947476 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.358931244s)
	I1229 07:59:27.991077  947476 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1229 07:59:27.991910  947476 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:59:27.991977  947476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:59:28.050658  947476 api_server.go:72] duration metric: took 2.395877786s to wait for apiserver process to appear ...
	I1229 07:59:28.050686  947476 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:59:28.050710  947476 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:59:28.085836  947476 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:59:28.086921  947476 api_server.go:141] control plane version: v1.35.0
	I1229 07:59:28.086952  947476 api_server.go:131] duration metric: took 36.259522ms to wait for apiserver health ...
	I1229 07:59:28.086964  947476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:59:28.092445  947476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1229 07:59:28.095367  947476 addons.go:530] duration metric: took 2.440216082s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1229 07:59:28.132710  947476 system_pods.go:59] 9 kube-system pods found
	I1229 07:59:28.132748  947476 system_pods.go:61] "coredns-7d764666f9-fwcsk" [9db7334f-a680-41ad-b032-4d68743da9e2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:28.132756  947476 system_pods.go:61] "coredns-7d764666f9-t9fsk" [a6a64d27-c39d-4289-ad6c-240e23b61c7c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:28.132763  947476 system_pods.go:61] "etcd-newest-cni-921639" [f5f342fa-4ca0-479a-aa4d-2cdc7f5d0a2c] Running
	I1229 07:59:28.132774  947476 system_pods.go:61] "kindnet-lbs8p" [1e342362-a751-4336-93d1-a1bdb3c1f16a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1229 07:59:28.132837  947476 system_pods.go:61] "kube-apiserver-newest-cni-921639" [dd73ba12-116f-4275-9d77-4e5f5f761657] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:28.132853  947476 system_pods.go:61] "kube-controller-manager-newest-cni-921639" [de7abdd9-1e08-4a9f-a47d-78f714e98094] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:28.132861  947476 system_pods.go:61] "kube-proxy-z62qd" [bb67aae3-6fe1-4c26-b760-5b1ba067fd96] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1229 07:59:28.132879  947476 system_pods.go:61] "kube-scheduler-newest-cni-921639" [9bcb6bda-4afb-4736-873f-90a2fdc42008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:28.132885  947476 system_pods.go:61] "storage-provisioner" [2be6bd44-f3fd-4b78-9d33-27b7c3452b52] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:28.132892  947476 system_pods.go:74] duration metric: took 45.922104ms to wait for pod list to return data ...
	I1229 07:59:28.132900  947476 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:59:28.147869  947476 default_sa.go:45] found service account: "default"
	I1229 07:59:28.147897  947476 default_sa.go:55] duration metric: took 14.987061ms for default service account to be created ...
	I1229 07:59:28.147911  947476 kubeadm.go:587] duration metric: took 2.493135697s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:59:28.147928  947476 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:59:28.174354  947476 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:59:28.174387  947476 node_conditions.go:123] node cpu capacity is 2
	I1229 07:59:28.174400  947476 node_conditions.go:105] duration metric: took 26.46674ms to run NodePressure ...
	I1229 07:59:28.174414  947476 start.go:242] waiting for startup goroutines ...
	I1229 07:59:28.517358  947476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-921639" context rescaled to 1 replicas
	I1229 07:59:28.517394  947476 start.go:247] waiting for cluster config update ...
	I1229 07:59:28.517407  947476 start.go:256] writing updated cluster config ...
	I1229 07:59:28.517708  947476 ssh_runner.go:195] Run: rm -f paused
	I1229 07:59:28.611717  947476 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:59:28.616102  947476 out.go:203] 
	W1229 07:59:28.619071  947476 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:59:28.622067  947476 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:59:28.625117  947476 out.go:179] * Done! kubectl is now configured to use "newest-cni-921639" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 07:59:13 newest-cni-921639 crio[834]: time="2025-12-29T07:59:13.947858064Z" level=info msg="Created container caa648acd54810d59c715ebe3582628dbb103b2105a8345faa66638dae073e73: kube-system/kube-scheduler-newest-cni-921639/kube-scheduler" id=83d441d8-fca4-4dd4-b49f-21add951108c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:13 newest-cni-921639 crio[834]: time="2025-12-29T07:59:13.948653245Z" level=info msg="Starting container: caa648acd54810d59c715ebe3582628dbb103b2105a8345faa66638dae073e73" id=24d90a95-f3bb-46aa-bf7f-8821ae741b6e name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:59:13 newest-cni-921639 crio[834]: time="2025-12-29T07:59:13.950835605Z" level=info msg="Started container" PID=1237 containerID=caa648acd54810d59c715ebe3582628dbb103b2105a8345faa66638dae073e73 description=kube-system/kube-scheduler-newest-cni-921639/kube-scheduler id=24d90a95-f3bb-46aa-bf7f-8821ae741b6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e1eb6160248cfa72b4e516861fffd307dff65d57a5ad7eff59845bfedb3cb5e
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.737646175Z" level=info msg="Running pod sandbox: kube-system/kindnet-lbs8p/POD" id=40539a31-f492-4467-bf55-d1a583473345 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.737719415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.746443766Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=40539a31-f492-4467-bf55-d1a583473345 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.76617063Z" level=info msg="Ran pod sandbox f9d8d37a4f24b3c5b84c79061ab74d620d562695f1e3a9fe4df8cdb3a22032b2 with infra container: kube-system/kindnet-lbs8p/POD" id=40539a31-f492-4467-bf55-d1a583473345 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.775046022Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2842cab7-05de-4ffe-95fb-65cfd0d95316 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.775188054Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=2842cab7-05de-4ffe-95fb-65cfd0d95316 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.775272863Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=2842cab7-05de-4ffe-95fb-65cfd0d95316 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.777144557Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=047cc034-39bb-49ad-90c6-8e3d60a66ab1 name=/runtime.v1.ImageService/PullImage
	Dec 29 07:59:26 newest-cni-921639 crio[834]: time="2025-12-29T07:59:26.777468506Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.194062828Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-z62qd/POD" id=af420deb-fecd-40e1-ab0f-82f8f8154473 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.194175888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.20975982Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=af420deb-fecd-40e1-ab0f-82f8f8154473 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.243993899Z" level=info msg="Ran pod sandbox 796d527ad6becd52174baeb1e3f22e1c67ab9e2f07286019d185f4da1e4542eb with infra container: kube-system/kube-proxy-z62qd/POD" id=af420deb-fecd-40e1-ab0f-82f8f8154473 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.253210922Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=1798b468-cc83-4f0f-ba08-09afd697524a name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.257135194Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=8d44b376-84d5-42f3-8e31-46f1225b3bd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.266606382Z" level=info msg="Creating container: kube-system/kube-proxy-z62qd/kube-proxy" id=85ddca62-867c-443e-a4fa-7a51284ba3b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.266760582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.285558323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.289384281Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.33419564Z" level=info msg="Created container d0f4311d06b33e5b9d1b729a827ff6b9a4ab5cf6de221c22107158c922ac3711: kube-system/kube-proxy-z62qd/kube-proxy" id=85ddca62-867c-443e-a4fa-7a51284ba3b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.339855106Z" level=info msg="Starting container: d0f4311d06b33e5b9d1b729a827ff6b9a4ab5cf6de221c22107158c922ac3711" id=b3b7b66b-af26-4c0f-808b-f33863a2787d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:59:28 newest-cni-921639 crio[834]: time="2025-12-29T07:59:28.360773315Z" level=info msg="Started container" PID=1473 containerID=d0f4311d06b33e5b9d1b729a827ff6b9a4ab5cf6de221c22107158c922ac3711 description=kube-system/kube-proxy-z62qd/kube-proxy id=b3b7b66b-af26-4c0f-808b-f33863a2787d name=/runtime.v1.RuntimeService/StartContainer sandboxID=796d527ad6becd52174baeb1e3f22e1c67ab9e2f07286019d185f4da1e4542eb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d0f4311d06b33       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   2 seconds ago       Running             kube-proxy                0                   796d527ad6bec       kube-proxy-z62qd                            kube-system
	caa648acd5481       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   17 seconds ago      Running             kube-scheduler            0                   6e1eb6160248c       kube-scheduler-newest-cni-921639            kube-system
	6a7b24a816742       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   17 seconds ago      Running             kube-apiserver            0                   ccab0fef0fc87       kube-apiserver-newest-cni-921639            kube-system
	a33d16b1b53d2       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   17 seconds ago      Running             etcd                      0                   f13cff7af706a       etcd-newest-cni-921639                      kube-system
	9e825281d5ee7       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   17 seconds ago      Running             kube-controller-manager   0                   dd83718659b00       kube-controller-manager-newest-cni-921639   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-921639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-921639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=newest-cni-921639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:59:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-921639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:59:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:59:21 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:59:21 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:59:21 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:59:21 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-921639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9187b359-79d8-4272-9631-ead43f58bc0f
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-921639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-lbs8p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6s
	  kube-system                 kube-apiserver-newest-cni-921639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-controller-manager-newest-cni-921639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-z62qd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-scheduler-newest-cni-921639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-921639 event: Registered Node newest-cni-921639 in Controller
	
	
	==> dmesg <==
	[ +36.397581] overlayfs: idmapped layers are currently not supported
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	[Dec29 07:59] overlayfs: idmapped layers are currently not supported
	[ +10.773618] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a33d16b1b53d24ac992a07f29bb66d60e825528d13942e7c63a38090c3777167] <==
	{"level":"info","ts":"2025-12-29T07:59:14.052045Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:59:14.812195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-29T07:59:14.812305Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-29T07:59:14.812379Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-29T07:59:14.812402Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:14.812419Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:14.813492Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:14.813529Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:14.813547Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:14.813556Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:14.815266Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-921639 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:59:14.815426Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:14.815576Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:14.815664Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:14.818293Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:14.818406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:14.819239Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:14.823976Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:14.827146Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:14.827271Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:14.827308Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:14.830199Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:59:14.845301Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-29T07:59:14.875240Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-29T07:59:14.876058Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 07:59:31 up  4:42,  0 user,  load average: 4.23, 2.47, 2.38
	Linux newest-cni-921639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [6a7b24a8167423666c818c56167ea09110b6daf61d05de749208683893695e21] <==
	I1229 07:59:18.296912       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:59:18.297003       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:59:18.302835       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1229 07:59:18.302943       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1229 07:59:18.314589       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:59:18.329830       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:59:18.329905       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1229 07:59:18.348881       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:59:18.996532       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1229 07:59:19.005579       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1229 07:59:19.005757       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:59:19.870828       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:59:19.932366       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:59:20.061379       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1229 07:59:20.104104       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1229 07:59:20.114661       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:59:20.136560       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:59:20.175732       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:59:21.058629       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:59:21.084919       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1229 07:59:21.109122       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1229 07:59:25.396292       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:59:25.419044       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:59:25.907855       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1229 07:59:26.410224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9e825281d5ee7653e6c01f438e98fdb1896f91efb23294cb2d945ae0a429c523] <==
	I1229 07:59:25.252553       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.272881       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.299273       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.299557       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.299646       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.299696       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.299932       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.301035       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.301095       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.301215       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.301354       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.305371       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.307923       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.308463       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.308621       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.308635       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:59:25.308640       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:59:25.312152       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.312233       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.312263       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.312452       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.312493       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.337522       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.337559       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:25.337594       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [d0f4311d06b33e5b9d1b729a827ff6b9a4ab5cf6de221c22107158c922ac3711] <==
	I1229 07:59:28.438558       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:59:28.530947       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:28.636919       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:28.636961       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:59:28.637064       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:59:28.774591       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:59:28.774642       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:59:28.801892       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:59:28.802206       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:59:28.802223       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:28.805196       1 config.go:200] "Starting service config controller"
	I1229 07:59:28.805209       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:59:28.805227       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:59:28.805231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:59:28.805241       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:59:28.805245       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:59:28.805855       1 config.go:309] "Starting node config controller"
	I1229 07:59:28.805862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:59:28.805868       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:59:28.905467       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:59:28.905496       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:59:28.905531       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [caa648acd54810d59c715ebe3582628dbb103b2105a8345faa66638dae073e73] <==
	E1229 07:59:18.305830       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:59:18.305874       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:59:18.305914       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:59:18.305980       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:59:18.306036       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:59:18.306070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:59:18.306099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:59:18.309186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:59:18.309270       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:59:18.309341       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:59:18.309389       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:59:18.309484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:59:18.309528       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:59:18.309561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:59:18.317063       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:59:19.134337       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1229 07:59:19.287606       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:59:19.319118       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:59:19.325223       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:59:19.400932       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:59:19.463390       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:59:19.463533       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:59:19.535455       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:59:19.709670       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1229 07:59:22.161063       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:59:22 newest-cni-921639 kubelet[1285]: I1229 07:59:22.529893    1285 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-921639" podStartSLOduration=1.529878458 podStartE2EDuration="1.529878458s" podCreationTimestamp="2025-12-29 07:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:59:22.481919589 +0000 UTC m=+1.522701975" watchObservedRunningTime="2025-12-29 07:59:22.529878458 +0000 UTC m=+1.570660852"
	Dec 29 07:59:23 newest-cni-921639 kubelet[1285]: E1229 07:59:23.396627    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-921639" containerName="etcd"
	Dec 29 07:59:23 newest-cni-921639 kubelet[1285]: E1229 07:59:23.397173    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-921639" containerName="kube-apiserver"
	Dec 29 07:59:23 newest-cni-921639 kubelet[1285]: E1229 07:59:23.397426    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-921639" containerName="kube-scheduler"
	Dec 29 07:59:24 newest-cni-921639 kubelet[1285]: E1229 07:59:24.399829    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-921639" containerName="kube-scheduler"
	Dec 29 07:59:25 newest-cni-921639 kubelet[1285]: I1229 07:59:25.163802    1285 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 29 07:59:25 newest-cni-921639 kubelet[1285]: I1229 07:59:25.164579    1285 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.237894    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-cni-cfg\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.238940    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-xtables-lock\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.239014    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-lib-modules\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.239037    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tn8\" (UniqueName: \"kubernetes.io/projected/1e342362-a751-4336-93d1-a1bdb3c1f16a-kube-api-access-c7tn8\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: E1229 07:59:26.427477    1285 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-921639\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-921639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.440916    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-kube-proxy\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.440973    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-lib-modules\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.440994    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pb7c\" (UniqueName: \"kubernetes.io/projected/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-kube-api-access-2pb7c\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.441017    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-xtables-lock\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: I1229 07:59:26.639462    1285 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 29 07:59:26 newest-cni-921639 kubelet[1285]: E1229 07:59:26.898363    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-921639" containerName="kube-controller-manager"
	Dec 29 07:59:27 newest-cni-921639 kubelet[1285]: E1229 07:59:27.168497    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-921639" containerName="kube-apiserver"
	Dec 29 07:59:27 newest-cni-921639 kubelet[1285]: E1229 07:59:27.543311    1285 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 29 07:59:27 newest-cni-921639 kubelet[1285]: E1229 07:59:27.543423    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-kube-proxy podName:bb67aae3-6fe1-4c26-b760-5b1ba067fd96 nodeName:}" failed. No retries permitted until 2025-12-29 07:59:28.04339719 +0000 UTC m=+7.084179576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-kube-proxy") pod "kube-proxy-z62qd" (UID: "bb67aae3-6fe1-4c26-b760-5b1ba067fd96") : failed to sync configmap cache: timed out waiting for the condition
	Dec 29 07:59:27 newest-cni-921639 kubelet[1285]: E1229 07:59:27.585697    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-921639" containerName="kube-scheduler"
	Dec 29 07:59:28 newest-cni-921639 kubelet[1285]: W1229 07:59:28.242880    1285 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/crio-796d527ad6becd52174baeb1e3f22e1c67ab9e2f07286019d185f4da1e4542eb WatchSource:0}: Error finding container 796d527ad6becd52174baeb1e3f22e1c67ab9e2f07286019d185f4da1e4542eb: Status 404 returned error can't find the container with id 796d527ad6becd52174baeb1e3f22e1c67ab9e2f07286019d185f4da1e4542eb
	Dec 29 07:59:30 newest-cni-921639 kubelet[1285]: E1229 07:59:30.098832    1285 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-921639" containerName="etcd"
	Dec 29 07:59:30 newest-cni-921639 kubelet[1285]: I1229 07:59:30.112697    1285 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-z62qd" podStartSLOduration=5.112681107 podStartE2EDuration="5.112681107s" podCreationTimestamp="2025-12-29 07:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 07:59:28.48216263 +0000 UTC m=+7.522945015" watchObservedRunningTime="2025-12-29 07:59:30.112681107 +0000 UTC m=+9.153463493"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921639 -n newest-cni-921639
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-921639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-t9fsk storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner: exit status 1 (134.544725ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-t9fsk" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.20s)
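For local triage, the post-mortem above can be replayed by hand; a minimal sketch, assuming the newest-cni-921639 context is still present in your kubeconfig (the pod names are the ones reported by this failing run and may no longer exist by the time you try):

#!/usr/bin/env bash
# Mirror helpers_test.go:270 above: list every pod that is not in the Running phase.
kubectl --context newest-cni-921639 get po -A \
  -o=jsonpath='{.items[*].metadata.name}' \
  --field-selector=status.phase!=Running
# Mirror helpers_test.go:286 above: describe the reported pods. This exits with
# status 1 and "NotFound" once the pods have been replaced or garbage-collected,
# which is what produced the stderr shown above.
kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner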

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-921639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-921639 --alsologtostderr -v=1: exit status 80 (1.959730342s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-921639 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:59:52.250035  955098 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:59:52.250519  955098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:52.250560  955098 out.go:374] Setting ErrFile to fd 2...
	I1229 07:59:52.250581  955098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:52.251314  955098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:59:52.251742  955098 out.go:368] Setting JSON to false
	I1229 07:59:52.251811  955098 mustload.go:66] Loading cluster: newest-cni-921639
	I1229 07:59:52.252562  955098 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:52.253346  955098 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:52.278245  955098 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:52.278570  955098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:52.338689  955098 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:59:52.329359118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:52.339396  955098 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-921639 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 07:59:52.344924  955098 out.go:179] * Pausing node newest-cni-921639 ... 
	I1229 07:59:52.348080  955098 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:52.348452  955098 ssh_runner.go:195] Run: systemctl --version
	I1229 07:59:52.348503  955098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:52.367587  955098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:52.497071  955098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:59:52.514397  955098 pause.go:52] kubelet running: true
	I1229 07:59:52.514478  955098 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:59:52.800482  955098 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:59:52.800580  955098 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:59:52.875483  955098 cri.go:96] found id: "eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805"
	I1229 07:59:52.875504  955098 cri.go:96] found id: "956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669"
	I1229 07:59:52.875509  955098 cri.go:96] found id: "b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441"
	I1229 07:59:52.875512  955098 cri.go:96] found id: "9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786"
	I1229 07:59:52.875515  955098 cri.go:96] found id: "3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e"
	I1229 07:59:52.875518  955098 cri.go:96] found id: "751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c"
	I1229 07:59:52.875521  955098 cri.go:96] found id: ""
	I1229 07:59:52.875571  955098 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:59:52.895137  955098 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:59:52Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:59:53.147640  955098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:59:53.161188  955098 pause.go:52] kubelet running: false
	I1229 07:59:53.161256  955098 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:59:53.346860  955098 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:59:53.346950  955098 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:59:53.472489  955098 cri.go:96] found id: "eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805"
	I1229 07:59:53.472512  955098 cri.go:96] found id: "956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669"
	I1229 07:59:53.472519  955098 cri.go:96] found id: "b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441"
	I1229 07:59:53.472523  955098 cri.go:96] found id: "9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786"
	I1229 07:59:53.472527  955098 cri.go:96] found id: "3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e"
	I1229 07:59:53.472531  955098 cri.go:96] found id: "751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c"
	I1229 07:59:53.472534  955098 cri.go:96] found id: ""
	I1229 07:59:53.472585  955098 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:59:53.862412  955098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:59:53.875535  955098 pause.go:52] kubelet running: false
	I1229 07:59:53.875615  955098 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 07:59:54.029498  955098 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 07:59:54.029591  955098 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 07:59:54.118376  955098 cri.go:96] found id: "eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805"
	I1229 07:59:54.118397  955098 cri.go:96] found id: "956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669"
	I1229 07:59:54.118402  955098 cri.go:96] found id: "b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441"
	I1229 07:59:54.118421  955098 cri.go:96] found id: "9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786"
	I1229 07:59:54.118426  955098 cri.go:96] found id: "3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e"
	I1229 07:59:54.118430  955098 cri.go:96] found id: "751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c"
	I1229 07:59:54.118434  955098 cri.go:96] found id: ""
	I1229 07:59:54.118486  955098 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 07:59:54.134615  955098 out.go:203] 
	W1229 07:59:54.137475  955098 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:59:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:59:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 07:59:54.137498  955098 out.go:285] * 
	* 
	W1229 07:59:54.141365  955098 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:59:54.145161  955098 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-921639 --alsologtostderr -v=1 failed: exit status 80
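The failure above matches minikube's own retry in the stderr log: crictl lists the running kube-system containers successfully, but `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". A minimal reproduction sketch, assuming the profile is still running and that crio is using the default runc state directory, is to rerun the same commands the pause flow issued inside the node:

	# hedged sketch: repeat the checks from the pause flow on the node
	out/minikube-linux-arm64 ssh -p newest-cni-921639 -- "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 ssh -p newest-cni-921639 -- "sudo runc list -f json"    # expected to reproduce the exit status 1 seen above
	out/minikube-linux-arm64 ssh -p newest-cni-921639 -- "sudo ls -ld /run/runc"     # checks whether the runc state directory exists at all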
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-921639
helpers_test.go:244: (dbg) docker inspect newest-cni-921639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6",
	        "Created": "2025-12-29T07:59:00.549488049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 953184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:59:35.242252977Z",
	            "FinishedAt": "2025-12-29T07:59:34.219215095Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/hosts",
	        "LogPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6-json.log",
	        "Name": "/newest-cni-921639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-921639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-921639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6",
	                "LowerDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-921639",
	                "Source": "/var/lib/docker/volumes/newest-cni-921639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-921639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-921639",
	                "name.minikube.sigs.k8s.io": "newest-cni-921639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa8a827c3ec8bf9d12c96672467c1c119b01baf7e8e8405bfacffed5a1122c3f",
	            "SandboxKey": "/var/run/docker/netns/fa8a827c3ec8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-921639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:9a:be:e9:29:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c6a7ca1056e7338b014b856b3d02e72a65fdee86fa30ab64b1ed8a6a45ad6d1",
	                    "EndpointID": "46a952997ec6148cdd2ea0bfa9fe576e6e95dc48b11a8a891f8e7e836d278e13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-921639",
	                        "829fd18972bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639: exit status 2 (342.814606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
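Since --format={{.Host}} only reports the host state, a fuller view of which component stopped (the pause flow had already disabled kubelet before it failed), assuming the same binary and profile, would be:

	# hedged sketch: show per-component status instead of only the host field
	out/minikube-linux-arm64 status -p newest-cni-921639 --alsologtostderr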
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-921639 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-921639 logs -n 25: (1.16645768s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:58 UTC │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p force-systemd-flag-122604                                                                                                                                                                                                                  │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p disable-driver-mounts-062900                                                                                                                                                                                                               │ disable-driver-mounts-062900 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ image   │ embed-certs-664524 image list --format=json                                                                                                                                                                                                   │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-358521 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-358521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ stop    │ -p newest-cni-921639 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-921639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ image   │ newest-cni-921639 image list --format=json                                                                                                                                                                                                    │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ pause   │ -p newest-cni-921639 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:59:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:59:34.867755  953059 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:59:34.867955  953059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:34.867984  953059 out.go:374] Setting ErrFile to fd 2...
	I1229 07:59:34.868003  953059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:34.868389  953059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:59:34.869019  953059 out.go:368] Setting JSON to false
	I1229 07:59:34.870241  953059 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16927,"bootTime":1766978248,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:59:34.870358  953059 start.go:143] virtualization:  
	I1229 07:59:34.872208  953059 out.go:179] * [newest-cni-921639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:59:34.873542  953059 notify.go:221] Checking for updates...
	I1229 07:59:34.873987  953059 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:59:34.875295  953059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:59:34.881954  953059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:34.883067  953059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:59:34.884310  953059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:59:34.885396  953059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:59:34.887691  953059 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:34.888277  953059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:59:34.925640  953059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:59:34.925756  953059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:35.058251  953059 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:59:35.040632708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:35.058378  953059 docker.go:319] overlay module found
	I1229 07:59:35.059745  953059 out.go:179] * Using the docker driver based on existing profile
	I1229 07:59:35.060898  953059 start.go:309] selected driver: docker
	I1229 07:59:35.060915  953059 start.go:928] validating driver "docker" against &{Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:35.061012  953059 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:59:35.061782  953059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:35.169449  953059 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:59:35.155265822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:35.169787  953059 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:59:35.169805  953059 cni.go:84] Creating CNI manager for ""
	I1229 07:59:35.169865  953059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:35.169897  953059 start.go:353] cluster config:
	{Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:35.171268  953059 out.go:179] * Starting "newest-cni-921639" primary control-plane node in "newest-cni-921639" cluster
	I1229 07:59:35.172378  953059 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:59:35.173563  953059 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:59:35.174674  953059 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:59:35.174717  953059 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:59:35.174725  953059 cache.go:65] Caching tarball of preloaded images
	I1229 07:59:35.174807  953059 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:59:35.174817  953059 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:59:35.174942  953059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/config.json ...
	I1229 07:59:35.175155  953059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:59:35.197754  953059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:59:35.197779  953059 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:59:35.197794  953059 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:59:35.197831  953059 start.go:360] acquireMachinesLock for newest-cni-921639: {Name:mkc8472acf578a41625391ce0b888363af994d9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:59:35.197891  953059 start.go:364] duration metric: took 37.244µs to acquireMachinesLock for "newest-cni-921639"
	I1229 07:59:35.197915  953059 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:59:35.197924  953059 fix.go:54] fixHost starting: 
	I1229 07:59:35.198220  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:35.215275  953059 fix.go:112] recreateIfNeeded on newest-cni-921639: state=Stopped err=<nil>
	W1229 07:59:35.215307  953059 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:59:33.835187  950174 addons.go:530] duration metric: took 9.158837049s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1229 07:59:33.845466  950174 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1229 07:59:33.848945  950174 api_server.go:141] control plane version: v1.35.0
	I1229 07:59:33.848982  950174 api_server.go:131] duration metric: took 22.414845ms to wait for apiserver health ...
	I1229 07:59:33.848992  950174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:59:33.856382  950174 system_pods.go:59] 8 kube-system pods found
	I1229 07:59:33.856428  950174 system_pods.go:61] "coredns-7d764666f9-jv9qg" [d90c5059-2a03-4f81-ae2f-d4b1096b9980] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:59:33.856479  950174 system_pods.go:61] "etcd-default-k8s-diff-port-358521" [0eee75ac-1aee-4ac1-8cfa-bbd734dec788] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:59:33.856495  950174 system_pods.go:61] "kindnet-x9pdb" [eb9ce56b-a80b-48b7-9580-263fdc75601b] Running
	I1229 07:59:33.856504  950174 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-358521" [5cc78ab0-d29f-41cb-9caa-cffd2e0b9e00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:33.856519  950174 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-358521" [4fde9a1a-3130-4858-bf54-aa0a01a5ee12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:33.856542  950174 system_pods.go:61] "kube-proxy-58t84" [614cc100-008a-4997-ac58-cc1b29575b11] Running
	I1229 07:59:33.856555  950174 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-358521" [5fa106f8-bdcc-4d88-b4cd-b6ce4678e5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:33.856560  950174 system_pods.go:61] "storage-provisioner" [3a0877c5-d210-4e03-947b-f418cf49454a] Running
	I1229 07:59:33.856578  950174 system_pods.go:74] duration metric: took 7.578906ms to wait for pod list to return data ...
	I1229 07:59:33.856592  950174 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:59:33.865709  950174 default_sa.go:45] found service account: "default"
	I1229 07:59:33.865741  950174 default_sa.go:55] duration metric: took 9.143021ms for default service account to be created ...
	I1229 07:59:33.865752  950174 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:59:33.870461  950174 system_pods.go:86] 8 kube-system pods found
	I1229 07:59:33.870504  950174 system_pods.go:89] "coredns-7d764666f9-jv9qg" [d90c5059-2a03-4f81-ae2f-d4b1096b9980] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:59:33.870515  950174 system_pods.go:89] "etcd-default-k8s-diff-port-358521" [0eee75ac-1aee-4ac1-8cfa-bbd734dec788] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:59:33.870557  950174 system_pods.go:89] "kindnet-x9pdb" [eb9ce56b-a80b-48b7-9580-263fdc75601b] Running
	I1229 07:59:33.870575  950174 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-358521" [5cc78ab0-d29f-41cb-9caa-cffd2e0b9e00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:33.870584  950174 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-358521" [4fde9a1a-3130-4858-bf54-aa0a01a5ee12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:33.870593  950174 system_pods.go:89] "kube-proxy-58t84" [614cc100-008a-4997-ac58-cc1b29575b11] Running
	I1229 07:59:33.870601  950174 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-358521" [5fa106f8-bdcc-4d88-b4cd-b6ce4678e5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:33.870636  950174 system_pods.go:89] "storage-provisioner" [3a0877c5-d210-4e03-947b-f418cf49454a] Running
	I1229 07:59:33.870650  950174 system_pods.go:126] duration metric: took 4.892945ms to wait for k8s-apps to be running ...
	I1229 07:59:33.870659  950174 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:59:33.870733  950174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:59:33.899945  950174 system_svc.go:56] duration metric: took 29.275961ms WaitForService to wait for kubelet
	I1229 07:59:33.900017  950174 kubeadm.go:587] duration metric: took 9.224067308s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:59:33.900053  950174 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:59:33.910021  950174 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:59:33.910108  950174 node_conditions.go:123] node cpu capacity is 2
	I1229 07:59:33.910141  950174 node_conditions.go:105] duration metric: took 10.063725ms to run NodePressure ...
	I1229 07:59:33.910169  950174 start.go:242] waiting for startup goroutines ...
	I1229 07:59:33.910195  950174 start.go:247] waiting for cluster config update ...
	I1229 07:59:33.910223  950174 start.go:256] writing updated cluster config ...
	I1229 07:59:33.910544  950174 ssh_runner.go:195] Run: rm -f paused
	I1229 07:59:33.917022  950174 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:59:33.929445  950174 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jv9qg" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:59:35.216869  953059 out.go:252] * Restarting existing docker container for "newest-cni-921639" ...
	I1229 07:59:35.216974  953059 cli_runner.go:164] Run: docker start newest-cni-921639
	I1229 07:59:35.485988  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:35.510500  953059 kic.go:430] container "newest-cni-921639" state is running.
	I1229 07:59:35.510886  953059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-921639
	I1229 07:59:35.538208  953059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/config.json ...
	I1229 07:59:35.538439  953059 machine.go:94] provisionDockerMachine start ...
	I1229 07:59:35.538495  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:35.561860  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:35.562196  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:35.562204  953059 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:59:35.562792  953059 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57780->127.0.0.1:33858: read: connection reset by peer
	I1229 07:59:38.729307  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-921639
	
	I1229 07:59:38.729331  953059 ubuntu.go:182] provisioning hostname "newest-cni-921639"
	I1229 07:59:38.729407  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:38.753616  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:38.753934  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:38.753956  953059 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-921639 && echo "newest-cni-921639" | sudo tee /etc/hostname
	I1229 07:59:38.944525  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-921639
	
	I1229 07:59:38.944631  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:38.975589  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:38.975920  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:38.975947  953059 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-921639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-921639/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-921639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:59:39.141407  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:59:39.141431  953059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:59:39.141461  953059 ubuntu.go:190] setting up certificates
	I1229 07:59:39.141469  953059 provision.go:84] configureAuth start
	I1229 07:59:39.141531  953059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-921639
	I1229 07:59:39.165031  953059 provision.go:143] copyHostCerts
	I1229 07:59:39.165117  953059 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:59:39.165135  953059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:59:39.165202  953059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:59:39.165290  953059 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:59:39.165295  953059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:59:39.165322  953059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:59:39.165385  953059 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:59:39.165389  953059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:59:39.165413  953059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:59:39.165473  953059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.newest-cni-921639 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-921639]
	I1229 07:59:39.418921  953059 provision.go:177] copyRemoteCerts
	I1229 07:59:39.419023  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:59:39.419091  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:39.442933  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:39.548849  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:59:39.569473  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:59:39.590644  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:59:39.614078  953059 provision.go:87] duration metric: took 472.587274ms to configureAuth
	I1229 07:59:39.614107  953059 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:59:39.614297  953059 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:39.614405  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:39.631699  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:39.632021  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:39.632042  953059 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1229 07:59:35.937370  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:38.434637  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	I1229 07:59:40.051232  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:59:40.051256  953059 machine.go:97] duration metric: took 4.512807084s to provisionDockerMachine
	I1229 07:59:40.051268  953059 start.go:293] postStartSetup for "newest-cni-921639" (driver="docker")
	I1229 07:59:40.051279  953059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:59:40.051355  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:59:40.051403  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.075514  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.185692  953059 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:59:40.189428  953059 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:59:40.189459  953059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:59:40.189471  953059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:59:40.189527  953059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:59:40.189608  953059 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:59:40.189710  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:59:40.198089  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:59:40.218062  953059 start.go:296] duration metric: took 166.777547ms for postStartSetup
	I1229 07:59:40.218153  953059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:59:40.218198  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.236638  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.342496  953059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:59:40.347547  953059 fix.go:56] duration metric: took 5.149614687s for fixHost
	I1229 07:59:40.347575  953059 start.go:83] releasing machines lock for "newest-cni-921639", held for 5.149670885s
	I1229 07:59:40.347644  953059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-921639
	I1229 07:59:40.370797  953059 ssh_runner.go:195] Run: cat /version.json
	I1229 07:59:40.370853  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.371051  953059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:59:40.371110  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.410145  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.416380  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.625464  953059 ssh_runner.go:195] Run: systemctl --version
	I1229 07:59:40.632606  953059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:59:40.692116  953059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:59:40.697332  953059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:59:40.697407  953059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:59:40.706048  953059 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:59:40.706076  953059 start.go:496] detecting cgroup driver to use...
	I1229 07:59:40.706109  953059 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:59:40.706164  953059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:59:40.722466  953059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:59:40.741806  953059 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:59:40.741870  953059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:59:40.759130  953059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:59:40.773403  953059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:59:40.905287  953059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:59:41.064715  953059 docker.go:234] disabling docker service ...
	I1229 07:59:41.064890  953059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:59:41.084109  953059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:59:41.097949  953059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:59:41.245592  953059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:59:41.392435  953059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:59:41.409773  953059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:59:41.425386  953059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:59:41.425449  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.443197  953059 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:59:41.443266  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.464457  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.483876  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.496773  953059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:59:41.508980  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.521815  953059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.534890  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.548657  953059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:59:41.557543  953059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:59:41.566455  953059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:41.748779  953059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:59:42.420437  953059 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:59:42.420510  953059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:59:42.425485  953059 start.go:574] Will wait 60s for crictl version
	I1229 07:59:42.425628  953059 ssh_runner.go:195] Run: which crictl
	I1229 07:59:42.433128  953059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:59:42.471529  953059 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:59:42.471611  953059 ssh_runner.go:195] Run: crio --version
	I1229 07:59:42.527379  953059 ssh_runner.go:195] Run: crio --version
	I1229 07:59:42.578032  953059 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:59:42.581088  953059 cli_runner.go:164] Run: docker network inspect newest-cni-921639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:59:42.606812  953059 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:59:42.615364  953059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:59:42.633970  953059 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1229 07:59:42.637066  953059 kubeadm.go:884] updating cluster {Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:59:42.637363  953059 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:59:42.637441  953059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:59:42.699353  953059 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:59:42.699420  953059 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:59:42.699506  953059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:59:42.756864  953059 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:59:42.756894  953059 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:59:42.756903  953059 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:59:42.757003  953059 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-921639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:59:42.757105  953059 ssh_runner.go:195] Run: crio config
	I1229 07:59:42.899664  953059 cni.go:84] Creating CNI manager for ""
	I1229 07:59:42.899742  953059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:42.899775  953059 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1229 07:59:42.899839  953059 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-921639 NodeName:newest-cni-921639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:59:42.900024  953059 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-921639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:59:42.900147  953059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:59:42.909217  953059 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:59:42.909346  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:59:42.917840  953059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:59:42.933310  953059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:59:42.949393  953059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1229 07:59:42.963030  953059 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:59:42.967051  953059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:59:42.976655  953059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:43.183900  953059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:43.214199  953059 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639 for IP: 192.168.85.2
	I1229 07:59:43.214269  953059 certs.go:195] generating shared ca certs ...
	I1229 07:59:43.214300  953059 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:43.214488  953059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:59:43.214569  953059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:59:43.214608  953059 certs.go:257] generating profile certs ...
	I1229 07:59:43.214742  953059 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/client.key
	I1229 07:59:43.214871  953059 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/apiserver.key.6bbd8bf7
	I1229 07:59:43.214961  953059 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/proxy-client.key
	I1229 07:59:43.215130  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:59:43.215193  953059 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:59:43.215220  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:59:43.215284  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:59:43.215345  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:59:43.215409  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:59:43.215506  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:59:43.216119  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:59:43.301078  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:59:43.355692  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:59:43.383398  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:59:43.428506  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:59:43.473682  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:59:43.531671  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:59:43.581101  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:59:43.638253  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:59:43.680960  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:59:43.716203  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:59:43.742745  953059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:59:43.761185  953059 ssh_runner.go:195] Run: openssl version
	I1229 07:59:43.771799  953059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.781443  953059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:59:43.797517  953059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.802290  953059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.802353  953059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.854614  953059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:59:43.866403  953059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.875432  953059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:59:43.884164  953059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.888663  953059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.888728  953059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.938940  953059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:59:43.947777  953059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:59:43.956401  953059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:59:43.968332  953059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:59:43.974049  953059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:59:43.974127  953059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:59:44.031702  953059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:59:44.045819  953059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:59:44.049842  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:59:44.099222  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:59:44.212163  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:59:44.387583  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:59:44.531109  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:59:44.626367  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:59:44.722261  953059 kubeadm.go:401] StartCluster: {Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:44.722365  953059 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:59:44.722452  953059 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:59:44.783995  953059 cri.go:96] found id: "b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441"
	I1229 07:59:44.784019  953059 cri.go:96] found id: "9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786"
	I1229 07:59:44.784023  953059 cri.go:96] found id: "3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e"
	I1229 07:59:44.784027  953059 cri.go:96] found id: "751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c"
	I1229 07:59:44.784031  953059 cri.go:96] found id: ""
	I1229 07:59:44.784087  953059 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:59:44.809096  953059 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:59:44Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:59:44.809167  953059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:59:44.829218  953059 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:59:44.829239  953059 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:59:44.829305  953059 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:59:44.847192  953059 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:59:44.847766  953059 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-921639" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:44.848023  953059 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-921639" cluster setting kubeconfig missing "newest-cni-921639" context setting]
	I1229 07:59:44.848451  953059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:44.849838  953059 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:59:44.885596  953059 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:59:44.885688  953059 kubeadm.go:602] duration metric: took 56.442287ms to restartPrimaryControlPlane
	I1229 07:59:44.885714  953059 kubeadm.go:403] duration metric: took 163.461774ms to StartCluster
	I1229 07:59:44.885761  953059 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:44.885870  953059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:44.886815  953059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:44.887293  953059 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:44.887378  953059 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:59:44.887424  953059 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:59:44.887646  953059 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-921639"
	I1229 07:59:44.887677  953059 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-921639"
	W1229 07:59:44.887698  953059 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:59:44.887752  953059 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:44.888273  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.888737  953059 addons.go:70] Setting dashboard=true in profile "newest-cni-921639"
	I1229 07:59:44.888767  953059 addons.go:239] Setting addon dashboard=true in "newest-cni-921639"
	W1229 07:59:44.888775  953059 addons.go:248] addon dashboard should already be in state true
	I1229 07:59:44.888817  953059 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:44.889285  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.893606  953059 out.go:179] * Verifying Kubernetes components...
	I1229 07:59:44.893609  953059 addons.go:70] Setting default-storageclass=true in profile "newest-cni-921639"
	I1229 07:59:44.893712  953059 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-921639"
	I1229 07:59:44.894050  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.897938  953059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:44.943646  953059 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:59:44.945966  953059 addons.go:239] Setting addon default-storageclass=true in "newest-cni-921639"
	W1229 07:59:44.945984  953059 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:59:44.946007  953059 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:44.946424  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.948900  953059 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:44.948919  953059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:59:44.948979  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:44.964824  953059 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:59:44.967873  953059 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1229 07:59:40.439714  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:42.448753  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:44.944014  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	I1229 07:59:44.975297  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:59:44.975324  953059 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:59:44.975390  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:45.002710  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:45.002890  953059 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:45.002907  953059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:59:45.002975  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:45.020957  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:45.057801  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:45.414297  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:59:45.414323  953059 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:59:45.500447  953059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:45.512346  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:59:45.512374  953059 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:59:45.540875  953059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:45.545947  953059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:45.589435  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:59:45.589464  953059 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:59:45.720433  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:59:45.720496  953059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:59:45.806499  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:59:45.806525  953059 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:59:45.916443  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:59:45.916480  953059 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:59:45.963923  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:59:45.963954  953059 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:59:45.985889  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:59:45.985911  953059 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:59:46.012580  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:59:46.012622  953059 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:59:46.039169  953059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:59:49.790654  953059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.290160687s)
	W1229 07:59:47.435675  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:49.935203  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	I1229 07:59:50.855308  953059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.314396505s)
	I1229 07:59:50.855388  953059 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.30934721s)
	I1229 07:59:50.855432  953059 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:59:50.855496  953059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:59:50.888258  953059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.849033974s)
	I1229 07:59:50.891363  953059 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-921639 addons enable metrics-server
	
	I1229 07:59:50.893019  953059 api_server.go:72] duration metric: took 6.005521162s to wait for apiserver process to appear ...
	I1229 07:59:50.893100  953059 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:59:50.893152  953059 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:59:50.899438  953059 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1229 07:59:50.902478  953059 addons.go:530] duration metric: took 6.015044314s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1229 07:59:50.905509  953059 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:59:50.905578  953059 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1229 07:59:51.394183  953059 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:59:51.402386  953059 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:59:51.403594  953059 api_server.go:141] control plane version: v1.35.0
	I1229 07:59:51.403621  953059 api_server.go:131] duration metric: took 510.481935ms to wait for apiserver health ...
	I1229 07:59:51.403631  953059 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:59:51.407310  953059 system_pods.go:59] 8 kube-system pods found
	I1229 07:59:51.407357  953059 system_pods.go:61] "coredns-7d764666f9-t9fsk" [a6a64d27-c39d-4289-ad6c-240e23b61c7c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:51.407368  953059 system_pods.go:61] "etcd-newest-cni-921639" [f5f342fa-4ca0-479a-aa4d-2cdc7f5d0a2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:59:51.407377  953059 system_pods.go:61] "kindnet-lbs8p" [1e342362-a751-4336-93d1-a1bdb3c1f16a] Running
	I1229 07:59:51.407385  953059 system_pods.go:61] "kube-apiserver-newest-cni-921639" [dd73ba12-116f-4275-9d77-4e5f5f761657] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:51.407398  953059 system_pods.go:61] "kube-controller-manager-newest-cni-921639" [de7abdd9-1e08-4a9f-a47d-78f714e98094] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:51.407405  953059 system_pods.go:61] "kube-proxy-z62qd" [bb67aae3-6fe1-4c26-b760-5b1ba067fd96] Running
	I1229 07:59:51.407415  953059 system_pods.go:61] "kube-scheduler-newest-cni-921639" [9bcb6bda-4afb-4736-873f-90a2fdc42008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:51.407427  953059 system_pods.go:61] "storage-provisioner" [2be6bd44-f3fd-4b78-9d33-27b7c3452b52] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:51.407438  953059 system_pods.go:74] duration metric: took 3.800514ms to wait for pod list to return data ...
	I1229 07:59:51.407446  953059 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:59:51.409832  953059 default_sa.go:45] found service account: "default"
	I1229 07:59:51.409852  953059 default_sa.go:55] duration metric: took 2.400347ms for default service account to be created ...
	I1229 07:59:51.409867  953059 kubeadm.go:587] duration metric: took 6.522370155s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:59:51.409883  953059 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:59:51.412293  953059 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:59:51.412317  953059 node_conditions.go:123] node cpu capacity is 2
	I1229 07:59:51.412330  953059 node_conditions.go:105] duration metric: took 2.441423ms to run NodePressure ...
	I1229 07:59:51.412342  953059 start.go:242] waiting for startup goroutines ...
	I1229 07:59:51.412349  953059 start.go:247] waiting for cluster config update ...
	I1229 07:59:51.412360  953059 start.go:256] writing updated cluster config ...
	I1229 07:59:51.412652  953059 ssh_runner.go:195] Run: rm -f paused
	I1229 07:59:51.479035  953059 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:59:51.482606  953059 out.go:203] 
	W1229 07:59:51.486673  953059 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:59:51.490759  953059 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:59:51.493679  953059 out.go:179] * Done! kubectl is now configured to use "newest-cni-921639" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.941587477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.950097657Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=10f7456e-cd6a-46e1-95ca-45e560a5fb77 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.959208834Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-z62qd/POD" id=92308c2e-9182-4c12-9209-6bfa21e0b00f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.959355329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.96375529Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=92308c2e-9182-4c12-9209-6bfa21e0b00f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.980212288Z" level=info msg="Ran pod sandbox 9be75efe6ce7ee16e1c43d96eddce80c944e3e3fee14addcd3d0c1634e24f1e2 with infra container: kube-system/kindnet-lbs8p/POD" id=10f7456e-cd6a-46e1-95ca-45e560a5fb77 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.98575433Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=001e0e31-2360-4e8c-a444-44de878fee96 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.987168363Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6b8ebebe-2df4-4140-9741-442f235b0bee name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.991713613Z" level=info msg="Creating container: kube-system/kindnet-lbs8p/kindnet-cni" id=3fd13d6d-fb65-474f-bb4b-db44520a5892 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.991911203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.996569578Z" level=info msg="Ran pod sandbox 4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e with infra container: kube-system/kube-proxy-z62qd/POD" id=92308c2e-9182-4c12-9209-6bfa21e0b00f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.00310265Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=e7e447c2-4c3b-4bc8-97c2-d2b2929074f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.006187507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.008889691Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=57f2f8bc-3f02-4886-9b62-c50ff44a4ffb name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.010466581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.013251111Z" level=info msg="Creating container: kube-system/kube-proxy-z62qd/kube-proxy" id=f5b299f8-fadd-4fad-bf66-bdc9d59ba4a5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.013598618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.03493316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.038373868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.060666941Z" level=info msg="Created container 956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669: kube-system/kindnet-lbs8p/kindnet-cni" id=3fd13d6d-fb65-474f-bb4b-db44520a5892 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.061675113Z" level=info msg="Starting container: 956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669" id=0344d078-06cc-4f7f-96cb-52d2bdc8bdb3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.064122878Z" level=info msg="Started container" PID=1072 containerID=956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669 description=kube-system/kindnet-lbs8p/kindnet-cni id=0344d078-06cc-4f7f-96cb-52d2bdc8bdb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9be75efe6ce7ee16e1c43d96eddce80c944e3e3fee14addcd3d0c1634e24f1e2
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.111939729Z" level=info msg="Created container eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805: kube-system/kube-proxy-z62qd/kube-proxy" id=f5b299f8-fadd-4fad-bf66-bdc9d59ba4a5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.112741385Z" level=info msg="Starting container: eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805" id=da789ac1-6f20-4901-b756-66fa699a4a72 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.121714509Z" level=info msg="Started container" PID=1079 containerID=eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805 description=kube-system/kube-proxy-z62qd/kube-proxy id=da789ac1-6f20-4901-b756-66fa699a4a72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	eb6b1f8340a27       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   5 seconds ago       Running             kube-proxy                1                   4be9f1f4164cd       kube-proxy-z62qd                            kube-system
	956b518531f5e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   5 seconds ago       Running             kindnet-cni               1                   9be75efe6ce7e       kindnet-lbs8p                               kube-system
	b478f4fb8f2f1       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   10 seconds ago      Running             kube-apiserver            1                   30ae2a218a116       kube-apiserver-newest-cni-921639            kube-system
	9a7655a59d3f5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   10 seconds ago      Running             kube-scheduler            1                   c886bfd5ad2e3       kube-scheduler-newest-cni-921639            kube-system
	3bea69a09373d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   10 seconds ago      Running             etcd                      1                   7678483af5366       etcd-newest-cni-921639                      kube-system
	751efea4e17c8       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   10 seconds ago      Running             kube-controller-manager   1                   79bb0ff7e7bc1       kube-controller-manager-newest-cni-921639   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-921639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-921639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=newest-cni-921639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:59:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-921639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:59:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-921639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9187b359-79d8-4272-9631-ead43f58bc0f
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-921639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-lbs8p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-921639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-921639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-z62qd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-921639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node newest-cni-921639 event: Registered Node newest-cni-921639 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-921639 event: Registered Node newest-cni-921639 in Controller
	
	
	==> dmesg <==
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	[Dec29 07:59] overlayfs: idmapped layers are currently not supported
	[ +10.773618] overlayfs: idmapped layers are currently not supported
	[ +19.528466] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e] <==
	{"level":"info","ts":"2025-12-29T07:59:44.789424Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-29T07:59:44.789486Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:59:44.789541Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:44.798888Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:44.798926Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:44.822032Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:59:44.822089Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:59:45.599425Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:45.599470Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:45.599530Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:45.599543Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:45.599564Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.604883Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.604930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:45.604951Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.604961Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.621254Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-921639 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:59:45.621295Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:45.621587Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:45.622693Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:45.633528Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:45.635124Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:45.666277Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:45.697949Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:59:45.705698Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 07:59:55 up  4:42,  0 user,  load average: 5.03, 2.83, 2.51
	Linux newest-cni-921639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669] <==
	I1229 07:59:50.176976       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:59:50.177223       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:59:50.178506       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:59:50.178521       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:59:50.178533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:59:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:59:50.455278       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:59:50.455382       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:59:50.455421       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:59:50.455584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441] <==
	I1229 07:59:49.508530       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:59:49.508754       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:59:49.539951       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:59:49.540048       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:59:49.540193       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.540457       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.540516       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.540609       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:59:49.552202       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:59:49.686277       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.686298       1 policy_source.go:248] refreshing policies
	E1229 07:59:49.687103       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:59:49.760276       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:59:49.806461       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:59:50.093025       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:59:50.466117       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:59:50.581289       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:59:50.647164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:59:50.673656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:59:50.857193       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.0.215"}
	I1229 07:59:50.881282       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.99.89"}
	I1229 07:59:52.630341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:59:52.927692       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:59:52.979046       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:59:53.133546       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c] <==
	I1229 07:59:52.550103       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:59:52.550203       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-921639"
	I1229 07:59:52.550328       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:59:52.549773       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.551707       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.554276       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557716       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557760       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557900       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557921       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557939       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.558039       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569016       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569187       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569229       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569273       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569337       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569726       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569807       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569819       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:59:52.569824       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:59:52.569901       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.576487       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.576701       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.613273       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805] <==
	I1229 07:59:50.528652       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:59:50.738847       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:50.889219       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:50.889260       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:59:50.889342       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:59:50.956521       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:59:50.957011       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:59:50.967102       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:59:50.967511       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:59:50.967583       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:50.972407       1 config.go:200] "Starting service config controller"
	I1229 07:59:50.972492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:59:50.972536       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:59:50.972568       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:59:50.972610       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:59:50.972641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:59:50.972840       1 config.go:309] "Starting node config controller"
	I1229 07:59:50.972861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:59:51.072978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:59:51.073023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:59:51.073070       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:59:51.073338       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786] <==
	I1229 07:59:46.574537       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:59:49.249296       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:59:49.249325       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:59:49.249335       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:59:49.249352       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:59:49.388224       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:59:49.388255       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:49.402774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:59:49.415600       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:49.417343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:59:49.417521       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:59:49.720287       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.634447     740 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-921639" containerName="etcd"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.634698     740 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-921639" containerName="kube-controller-manager"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.720199     740 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770199     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-lib-modules\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770262     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-lib-modules\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770297     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-xtables-lock\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770330     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-cni-cfg\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770350     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-xtables-lock\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.825084     740 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.838793     740 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.838876     740 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.838905     740 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.840658     740 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.842993     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-921639\" already exists" pod="kube-system/kube-controller-manager-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.843031     740 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.865911     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-921639\" already exists" pod="kube-system/kube-scheduler-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.865948     740 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.887724     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-921639\" already exists" pod="kube-system/etcd-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.887760     740 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.905217     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-921639\" already exists" pod="kube-system/kube-apiserver-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: W1229 07:59:49.989546     740 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/crio-4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e WatchSource:0}: Error finding container 4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e: Status 404 returned error can't find the container with id 4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e
	Dec 29 07:59:52 newest-cni-921639 kubelet[740]: E1229 07:59:52.491119     740 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-921639" containerName="kube-scheduler"
	Dec 29 07:59:52 newest-cni-921639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:59:52 newest-cni-921639 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:59:52 newest-cni-921639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921639 -n newest-cni-921639
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921639 -n newest-cni-921639: exit status 2 (415.073424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-921639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh: exit status 1 (85.106778ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-t9fsk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-swfgp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-ht5xh" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-921639
helpers_test.go:244: (dbg) docker inspect newest-cni-921639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6",
	        "Created": "2025-12-29T07:59:00.549488049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 953184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:59:35.242252977Z",
	            "FinishedAt": "2025-12-29T07:59:34.219215095Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/hosts",
	        "LogPath": "/var/lib/docker/containers/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6-json.log",
	        "Name": "/newest-cni-921639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-921639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-921639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6",
	                "LowerDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9dd7621e42924d787c12dd5a7189cc8c6028b6914bc99cd6333f2ca0ee5cc9ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-921639",
	                "Source": "/var/lib/docker/volumes/newest-cni-921639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-921639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-921639",
	                "name.minikube.sigs.k8s.io": "newest-cni-921639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa8a827c3ec8bf9d12c96672467c1c119b01baf7e8e8405bfacffed5a1122c3f",
	            "SandboxKey": "/var/run/docker/netns/fa8a827c3ec8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-921639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:9a:be:e9:29:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c6a7ca1056e7338b014b856b3d02e72a65fdee86fa30ab64b1ed8a6a45ad6d1",
	                    "EndpointID": "46a952997ec6148cdd2ea0bfa9fe576e6e95dc48b11a8a891f8e7e836d278e13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-921639",
	                        "829fd18972bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639: exit status 2 (351.752148ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-921639 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-921639 logs -n 25: (1.124125861s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-822299                                                                                                                                                                                                                          │ no-preload-822299            │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:56 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:56 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-664524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │                     │
	│ stop    │ -p embed-certs-664524 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ start   │ -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:58 UTC │
	│ ssh     │ force-systemd-flag-122604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p force-systemd-flag-122604                                                                                                                                                                                                                  │ force-systemd-flag-122604    │ jenkins │ v1.37.0 │ 29 Dec 25 07:57 UTC │ 29 Dec 25 07:57 UTC │
	│ delete  │ -p disable-driver-mounts-062900                                                                                                                                                                                                               │ disable-driver-mounts-062900 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ image   │ embed-certs-664524 image list --format=json                                                                                                                                                                                                   │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524           │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-358521 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-358521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ stop    │ -p newest-cni-921639 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-921639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ image   │ newest-cni-921639 image list --format=json                                                                                                                                                                                                    │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ pause   │ -p newest-cni-921639 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-921639            │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:59:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:59:34.867755  953059 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:59:34.867955  953059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:34.867984  953059 out.go:374] Setting ErrFile to fd 2...
	I1229 07:59:34.868003  953059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:59:34.868389  953059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:59:34.869019  953059 out.go:368] Setting JSON to false
	I1229 07:59:34.870241  953059 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16927,"bootTime":1766978248,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:59:34.870358  953059 start.go:143] virtualization:  
	I1229 07:59:34.872208  953059 out.go:179] * [newest-cni-921639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:59:34.873542  953059 notify.go:221] Checking for updates...
	I1229 07:59:34.873987  953059 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:59:34.875295  953059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:59:34.881954  953059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:34.883067  953059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:59:34.884310  953059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:59:34.885396  953059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:59:34.887691  953059 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:34.888277  953059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:59:34.925640  953059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:59:34.925756  953059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:35.058251  953059 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:59:35.040632708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:35.058378  953059 docker.go:319] overlay module found
	I1229 07:59:35.059745  953059 out.go:179] * Using the docker driver based on existing profile
	I1229 07:59:35.060898  953059 start.go:309] selected driver: docker
	I1229 07:59:35.060915  953059 start.go:928] validating driver "docker" against &{Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:35.061012  953059 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:59:35.061782  953059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:59:35.169449  953059 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:59:35.155265822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:59:35.169787  953059 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:59:35.169805  953059 cni.go:84] Creating CNI manager for ""
	I1229 07:59:35.169865  953059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:35.169897  953059 start.go:353] cluster config:
	{Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:35.171268  953059 out.go:179] * Starting "newest-cni-921639" primary control-plane node in "newest-cni-921639" cluster
	I1229 07:59:35.172378  953059 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:59:35.173563  953059 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:59:35.174674  953059 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:59:35.174717  953059 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:59:35.174725  953059 cache.go:65] Caching tarball of preloaded images
	I1229 07:59:35.174807  953059 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 07:59:35.174817  953059 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 07:59:35.174942  953059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/config.json ...
	I1229 07:59:35.175155  953059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:59:35.197754  953059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:59:35.197779  953059 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:59:35.197794  953059 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:59:35.197831  953059 start.go:360] acquireMachinesLock for newest-cni-921639: {Name:mkc8472acf578a41625391ce0b888363af994d9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:59:35.197891  953059 start.go:364] duration metric: took 37.244µs to acquireMachinesLock for "newest-cni-921639"
	I1229 07:59:35.197915  953059 start.go:96] Skipping create...Using existing machine configuration
	I1229 07:59:35.197924  953059 fix.go:54] fixHost starting: 
	I1229 07:59:35.198220  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:35.215275  953059 fix.go:112] recreateIfNeeded on newest-cni-921639: state=Stopped err=<nil>
	W1229 07:59:35.215307  953059 fix.go:138] unexpected machine state, will restart: <nil>
	I1229 07:59:33.835187  950174 addons.go:530] duration metric: took 9.158837049s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1229 07:59:33.845466  950174 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1229 07:59:33.848945  950174 api_server.go:141] control plane version: v1.35.0
	I1229 07:59:33.848982  950174 api_server.go:131] duration metric: took 22.414845ms to wait for apiserver health ...
	I1229 07:59:33.848992  950174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:59:33.856382  950174 system_pods.go:59] 8 kube-system pods found
	I1229 07:59:33.856428  950174 system_pods.go:61] "coredns-7d764666f9-jv9qg" [d90c5059-2a03-4f81-ae2f-d4b1096b9980] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:59:33.856479  950174 system_pods.go:61] "etcd-default-k8s-diff-port-358521" [0eee75ac-1aee-4ac1-8cfa-bbd734dec788] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:59:33.856495  950174 system_pods.go:61] "kindnet-x9pdb" [eb9ce56b-a80b-48b7-9580-263fdc75601b] Running
	I1229 07:59:33.856504  950174 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-358521" [5cc78ab0-d29f-41cb-9caa-cffd2e0b9e00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:33.856519  950174 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-358521" [4fde9a1a-3130-4858-bf54-aa0a01a5ee12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:33.856542  950174 system_pods.go:61] "kube-proxy-58t84" [614cc100-008a-4997-ac58-cc1b29575b11] Running
	I1229 07:59:33.856555  950174 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-358521" [5fa106f8-bdcc-4d88-b4cd-b6ce4678e5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:33.856560  950174 system_pods.go:61] "storage-provisioner" [3a0877c5-d210-4e03-947b-f418cf49454a] Running
	I1229 07:59:33.856578  950174 system_pods.go:74] duration metric: took 7.578906ms to wait for pod list to return data ...
	I1229 07:59:33.856592  950174 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:59:33.865709  950174 default_sa.go:45] found service account: "default"
	I1229 07:59:33.865741  950174 default_sa.go:55] duration metric: took 9.143021ms for default service account to be created ...
	I1229 07:59:33.865752  950174 system_pods.go:116] waiting for k8s-apps to be running ...
	I1229 07:59:33.870461  950174 system_pods.go:86] 8 kube-system pods found
	I1229 07:59:33.870504  950174 system_pods.go:89] "coredns-7d764666f9-jv9qg" [d90c5059-2a03-4f81-ae2f-d4b1096b9980] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1229 07:59:33.870515  950174 system_pods.go:89] "etcd-default-k8s-diff-port-358521" [0eee75ac-1aee-4ac1-8cfa-bbd734dec788] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:59:33.870557  950174 system_pods.go:89] "kindnet-x9pdb" [eb9ce56b-a80b-48b7-9580-263fdc75601b] Running
	I1229 07:59:33.870575  950174 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-358521" [5cc78ab0-d29f-41cb-9caa-cffd2e0b9e00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:33.870584  950174 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-358521" [4fde9a1a-3130-4858-bf54-aa0a01a5ee12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:33.870593  950174 system_pods.go:89] "kube-proxy-58t84" [614cc100-008a-4997-ac58-cc1b29575b11] Running
	I1229 07:59:33.870601  950174 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-358521" [5fa106f8-bdcc-4d88-b4cd-b6ce4678e5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:33.870636  950174 system_pods.go:89] "storage-provisioner" [3a0877c5-d210-4e03-947b-f418cf49454a] Running
	I1229 07:59:33.870650  950174 system_pods.go:126] duration metric: took 4.892945ms to wait for k8s-apps to be running ...
	I1229 07:59:33.870659  950174 system_svc.go:44] waiting for kubelet service to be running ....
	I1229 07:59:33.870733  950174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:59:33.899945  950174 system_svc.go:56] duration metric: took 29.275961ms WaitForService to wait for kubelet
	I1229 07:59:33.900017  950174 kubeadm.go:587] duration metric: took 9.224067308s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 07:59:33.900053  950174 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:59:33.910021  950174 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:59:33.910108  950174 node_conditions.go:123] node cpu capacity is 2
	I1229 07:59:33.910141  950174 node_conditions.go:105] duration metric: took 10.063725ms to run NodePressure ...
	I1229 07:59:33.910169  950174 start.go:242] waiting for startup goroutines ...
	I1229 07:59:33.910195  950174 start.go:247] waiting for cluster config update ...
	I1229 07:59:33.910223  950174 start.go:256] writing updated cluster config ...
	I1229 07:59:33.910544  950174 ssh_runner.go:195] Run: rm -f paused
	I1229 07:59:33.917022  950174 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1229 07:59:33.929445  950174 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jv9qg" in "kube-system" namespace to be "Ready" or be gone ...
	I1229 07:59:35.216869  953059 out.go:252] * Restarting existing docker container for "newest-cni-921639" ...
	I1229 07:59:35.216974  953059 cli_runner.go:164] Run: docker start newest-cni-921639
	I1229 07:59:35.485988  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:35.510500  953059 kic.go:430] container "newest-cni-921639" state is running.
	I1229 07:59:35.510886  953059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-921639
	I1229 07:59:35.538208  953059 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/config.json ...
	I1229 07:59:35.538439  953059 machine.go:94] provisionDockerMachine start ...
	I1229 07:59:35.538495  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:35.561860  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:35.562196  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:35.562204  953059 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:59:35.562792  953059 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57780->127.0.0.1:33858: read: connection reset by peer
	I1229 07:59:38.729307  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-921639
	
	I1229 07:59:38.729331  953059 ubuntu.go:182] provisioning hostname "newest-cni-921639"
	I1229 07:59:38.729407  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:38.753616  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:38.753934  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:38.753956  953059 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-921639 && echo "newest-cni-921639" | sudo tee /etc/hostname
	I1229 07:59:38.944525  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-921639
	
	I1229 07:59:38.944631  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:38.975589  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:38.975920  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:38.975947  953059 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-921639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-921639/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-921639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:59:39.141407  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:59:39.141431  953059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-739997/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-739997/.minikube}
	I1229 07:59:39.141461  953059 ubuntu.go:190] setting up certificates
	I1229 07:59:39.141469  953059 provision.go:84] configureAuth start
	I1229 07:59:39.141531  953059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-921639
	I1229 07:59:39.165031  953059 provision.go:143] copyHostCerts
	I1229 07:59:39.165117  953059 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem, removing ...
	I1229 07:59:39.165135  953059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem
	I1229 07:59:39.165202  953059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/ca.pem (1078 bytes)
	I1229 07:59:39.165290  953059 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem, removing ...
	I1229 07:59:39.165295  953059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem
	I1229 07:59:39.165322  953059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/cert.pem (1123 bytes)
	I1229 07:59:39.165385  953059 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem, removing ...
	I1229 07:59:39.165389  953059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem
	I1229 07:59:39.165413  953059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-739997/.minikube/key.pem (1679 bytes)
	I1229 07:59:39.165473  953059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem org=jenkins.newest-cni-921639 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-921639]
	I1229 07:59:39.418921  953059 provision.go:177] copyRemoteCerts
	I1229 07:59:39.419023  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:59:39.419091  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:39.442933  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:39.548849  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1229 07:59:39.569473  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:59:39.590644  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1229 07:59:39.614078  953059 provision.go:87] duration metric: took 472.587274ms to configureAuth
	I1229 07:59:39.614107  953059 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:59:39.614297  953059 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:39.614405  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:39.631699  953059 main.go:144] libmachine: Using SSH client type: native
	I1229 07:59:39.632021  953059 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33858 <nil> <nil>}
	I1229 07:59:39.632042  953059 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1229 07:59:35.937370  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:38.434637  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	I1229 07:59:40.051232  953059 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1229 07:59:40.051256  953059 machine.go:97] duration metric: took 4.512807084s to provisionDockerMachine
	I1229 07:59:40.051268  953059 start.go:293] postStartSetup for "newest-cni-921639" (driver="docker")
	I1229 07:59:40.051279  953059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:59:40.051355  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:59:40.051403  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.075514  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.185692  953059 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:59:40.189428  953059 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:59:40.189459  953059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:59:40.189471  953059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/addons for local assets ...
	I1229 07:59:40.189527  953059 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-739997/.minikube/files for local assets ...
	I1229 07:59:40.189608  953059 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem -> 7418542.pem in /etc/ssl/certs
	I1229 07:59:40.189710  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:59:40.198089  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:59:40.218062  953059 start.go:296] duration metric: took 166.777547ms for postStartSetup
	I1229 07:59:40.218153  953059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:59:40.218198  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.236638  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.342496  953059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:59:40.347547  953059 fix.go:56] duration metric: took 5.149614687s for fixHost
	I1229 07:59:40.347575  953059 start.go:83] releasing machines lock for "newest-cni-921639", held for 5.149670885s
	I1229 07:59:40.347644  953059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-921639
	I1229 07:59:40.370797  953059 ssh_runner.go:195] Run: cat /version.json
	I1229 07:59:40.370853  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.371051  953059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:59:40.371110  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:40.410145  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.416380  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:40.625464  953059 ssh_runner.go:195] Run: systemctl --version
	I1229 07:59:40.632606  953059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1229 07:59:40.692116  953059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:59:40.697332  953059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:59:40.697407  953059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:59:40.706048  953059 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1229 07:59:40.706076  953059 start.go:496] detecting cgroup driver to use...
	I1229 07:59:40.706109  953059 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1229 07:59:40.706164  953059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:59:40.722466  953059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:59:40.741806  953059 docker.go:218] disabling cri-docker service (if available) ...
	I1229 07:59:40.741870  953059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1229 07:59:40.759130  953059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1229 07:59:40.773403  953059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1229 07:59:40.905287  953059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1229 07:59:41.064715  953059 docker.go:234] disabling docker service ...
	I1229 07:59:41.064890  953059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1229 07:59:41.084109  953059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1229 07:59:41.097949  953059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1229 07:59:41.245592  953059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1229 07:59:41.392435  953059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:59:41.409773  953059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:59:41.425386  953059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1229 07:59:41.425449  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.443197  953059 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1229 07:59:41.443266  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.464457  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.483876  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.496773  953059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:59:41.508980  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.521815  953059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.534890  953059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1229 07:59:41.548657  953059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:59:41.557543  953059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:59:41.566455  953059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:41.748779  953059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1229 07:59:42.420437  953059 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1229 07:59:42.420510  953059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1229 07:59:42.425485  953059 start.go:574] Will wait 60s for crictl version
	I1229 07:59:42.425628  953059 ssh_runner.go:195] Run: which crictl
	I1229 07:59:42.433128  953059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:59:42.471529  953059 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I1229 07:59:42.471611  953059 ssh_runner.go:195] Run: crio --version
	I1229 07:59:42.527379  953059 ssh_runner.go:195] Run: crio --version
	I1229 07:59:42.578032  953059 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I1229 07:59:42.581088  953059 cli_runner.go:164] Run: docker network inspect newest-cni-921639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:59:42.606812  953059 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:59:42.615364  953059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:59:42.633970  953059 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1229 07:59:42.637066  953059 kubeadm.go:884] updating cluster {Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:59:42.637363  953059 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 07:59:42.637441  953059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:59:42.699353  953059 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:59:42.699420  953059 crio.go:433] Images already preloaded, skipping extraction
	I1229 07:59:42.699506  953059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1229 07:59:42.756864  953059 crio.go:561] all images are preloaded for cri-o runtime.
	I1229 07:59:42.756894  953059 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:59:42.756903  953059 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1229 07:59:42.757003  953059 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-921639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:59:42.757105  953059 ssh_runner.go:195] Run: crio config
	I1229 07:59:42.899664  953059 cni.go:84] Creating CNI manager for ""
	I1229 07:59:42.899742  953059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:59:42.899775  953059 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1229 07:59:42.899839  953059 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-921639 NodeName:newest-cni-921639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:59:42.900024  953059 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-921639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
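	(Editor's note: the rendered kubeadm/kubelet/kube-proxy manifest above is the file this log copies to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged, out-of-band sanity check of such a manifest — not something the test itself runs — kubeadm's dry-run mode could be used inside the node:

	  # illustrative only; path taken from this log, --dry-run makes no changes
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)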
	
	I1229 07:59:42.900147  953059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:59:42.909217  953059 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:59:42.909346  953059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:59:42.917840  953059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1229 07:59:42.933310  953059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:59:42.949393  953059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1229 07:59:42.963030  953059 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:59:42.967051  953059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:59:42.976655  953059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:43.183900  953059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:43.214199  953059 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639 for IP: 192.168.85.2
	I1229 07:59:43.214269  953059 certs.go:195] generating shared ca certs ...
	I1229 07:59:43.214300  953059 certs.go:227] acquiring lock for ca certs: {Name:mkb9bc7bf95bb7ad38ab0701b217d8eb14a80d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:43.214488  953059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key
	I1229 07:59:43.214569  953059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key
	I1229 07:59:43.214608  953059 certs.go:257] generating profile certs ...
	I1229 07:59:43.214742  953059 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/client.key
	I1229 07:59:43.214871  953059 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/apiserver.key.6bbd8bf7
	I1229 07:59:43.214961  953059 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/proxy-client.key
	I1229 07:59:43.215130  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem (1338 bytes)
	W1229 07:59:43.215193  953059 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854_empty.pem, impossibly tiny 0 bytes
	I1229 07:59:43.215220  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca-key.pem (1679 bytes)
	I1229 07:59:43.215284  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem (1078 bytes)
	I1229 07:59:43.215345  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:59:43.215409  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/certs/key.pem (1679 bytes)
	I1229 07:59:43.215506  953059 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem (1708 bytes)
	I1229 07:59:43.216119  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:59:43.301078  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:59:43.355692  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:59:43.383398  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1229 07:59:43.428506  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1229 07:59:43.473682  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1229 07:59:43.531671  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:59:43.581101  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/newest-cni-921639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:59:43.638253  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/ssl/certs/7418542.pem --> /usr/share/ca-certificates/7418542.pem (1708 bytes)
	I1229 07:59:43.680960  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:59:43.716203  953059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-739997/.minikube/certs/741854.pem --> /usr/share/ca-certificates/741854.pem (1338 bytes)
	I1229 07:59:43.742745  953059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:59:43.761185  953059 ssh_runner.go:195] Run: openssl version
	I1229 07:59:43.771799  953059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.781443  953059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7418542.pem /etc/ssl/certs/7418542.pem
	I1229 07:59:43.797517  953059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.802290  953059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 07:14 /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.802353  953059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7418542.pem
	I1229 07:59:43.854614  953059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:59:43.866403  953059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.875432  953059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:59:43.884164  953059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.888663  953059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 07:10 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.888728  953059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:59:43.938940  953059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:59:43.947777  953059 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/741854.pem
	I1229 07:59:43.956401  953059 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/741854.pem /etc/ssl/certs/741854.pem
	I1229 07:59:43.968332  953059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741854.pem
	I1229 07:59:43.974049  953059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 07:14 /usr/share/ca-certificates/741854.pem
	I1229 07:59:43.974127  953059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741854.pem
	I1229 07:59:44.031702  953059 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:59:44.045819  953059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:59:44.049842  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1229 07:59:44.099222  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1229 07:59:44.212163  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1229 07:59:44.387583  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1229 07:59:44.531109  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1229 07:59:44.626367  953059 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1229 07:59:44.722261  953059 kubeadm.go:401] StartCluster: {Name:newest-cni-921639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-921639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:59:44.722365  953059 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1229 07:59:44.722452  953059 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1229 07:59:44.783995  953059 cri.go:96] found id: "b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441"
	I1229 07:59:44.784019  953059 cri.go:96] found id: "9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786"
	I1229 07:59:44.784023  953059 cri.go:96] found id: "3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e"
	I1229 07:59:44.784027  953059 cri.go:96] found id: "751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c"
	I1229 07:59:44.784031  953059 cri.go:96] found id: ""
	I1229 07:59:44.784087  953059 ssh_runner.go:195] Run: sudo runc list -f json
	W1229 07:59:44.809096  953059 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:59:44Z" level=error msg="open /run/runc: no such file or directory"
	I1229 07:59:44.809167  953059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:59:44.829218  953059 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1229 07:59:44.829239  953059 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1229 07:59:44.829305  953059 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1229 07:59:44.847192  953059 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1229 07:59:44.847766  953059 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-921639" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:44.848023  953059 kubeconfig.go:62] /home/jenkins/minikube-integration/22353-739997/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-921639" cluster setting kubeconfig missing "newest-cni-921639" context setting]
	I1229 07:59:44.848451  953059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:44.849838  953059 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1229 07:59:44.885596  953059 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1229 07:59:44.885688  953059 kubeadm.go:602] duration metric: took 56.442287ms to restartPrimaryControlPlane
	I1229 07:59:44.885714  953059 kubeadm.go:403] duration metric: took 163.461774ms to StartCluster
	I1229 07:59:44.885761  953059 settings.go:142] acquiring lock: {Name:mk8b5d8d422a3d475b8adf5a3403363f6345f40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:44.885870  953059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:59:44.886815  953059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/kubeconfig: {Name:mk02d8347a8ad4d0ed179ec8a2082fee92695ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:59:44.887293  953059 config.go:182] Loaded profile config "newest-cni-921639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:59:44.887378  953059 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 07:59:44.887424  953059 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1229 07:59:44.887646  953059 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-921639"
	I1229 07:59:44.887677  953059 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-921639"
	W1229 07:59:44.887698  953059 addons.go:248] addon storage-provisioner should already be in state true
	I1229 07:59:44.887752  953059 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:44.888273  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.888737  953059 addons.go:70] Setting dashboard=true in profile "newest-cni-921639"
	I1229 07:59:44.888767  953059 addons.go:239] Setting addon dashboard=true in "newest-cni-921639"
	W1229 07:59:44.888775  953059 addons.go:248] addon dashboard should already be in state true
	I1229 07:59:44.888817  953059 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:44.889285  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.893606  953059 out.go:179] * Verifying Kubernetes components...
	I1229 07:59:44.893609  953059 addons.go:70] Setting default-storageclass=true in profile "newest-cni-921639"
	I1229 07:59:44.893712  953059 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-921639"
	I1229 07:59:44.894050  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.897938  953059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:59:44.943646  953059 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1229 07:59:44.945966  953059 addons.go:239] Setting addon default-storageclass=true in "newest-cni-921639"
	W1229 07:59:44.945984  953059 addons.go:248] addon default-storageclass should already be in state true
	I1229 07:59:44.946007  953059 host.go:66] Checking if "newest-cni-921639" exists ...
	I1229 07:59:44.946424  953059 cli_runner.go:164] Run: docker container inspect newest-cni-921639 --format={{.State.Status}}
	I1229 07:59:44.948900  953059 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:44.948919  953059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1229 07:59:44.948979  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:44.964824  953059 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1229 07:59:44.967873  953059 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1229 07:59:40.439714  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:42.448753  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:44.944014  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	I1229 07:59:44.975297  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1229 07:59:44.975324  953059 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1229 07:59:44.975390  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:45.002710  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:45.002890  953059 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:45.002907  953059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1229 07:59:45.002975  953059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-921639
	I1229 07:59:45.020957  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:45.057801  953059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33858 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/newest-cni-921639/id_rsa Username:docker}
	I1229 07:59:45.414297  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1229 07:59:45.414323  953059 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1229 07:59:45.500447  953059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1229 07:59:45.512346  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1229 07:59:45.512374  953059 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1229 07:59:45.540875  953059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1229 07:59:45.545947  953059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:59:45.589435  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1229 07:59:45.589464  953059 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1229 07:59:45.720433  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1229 07:59:45.720496  953059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1229 07:59:45.806499  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1229 07:59:45.806525  953059 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1229 07:59:45.916443  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1229 07:59:45.916480  953059 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1229 07:59:45.963923  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1229 07:59:45.963954  953059 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1229 07:59:45.985889  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1229 07:59:45.985911  953059 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1229 07:59:46.012580  953059 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:59:46.012622  953059 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1229 07:59:46.039169  953059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1229 07:59:49.790654  953059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.290160687s)
	W1229 07:59:47.435675  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:49.935203  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	I1229 07:59:50.855308  953059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.314396505s)
	I1229 07:59:50.855388  953059 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.30934721s)
	I1229 07:59:50.855432  953059 api_server.go:52] waiting for apiserver process to appear ...
	I1229 07:59:50.855496  953059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:59:50.888258  953059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.849033974s)
	I1229 07:59:50.891363  953059 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-921639 addons enable metrics-server
	
	I1229 07:59:50.893019  953059 api_server.go:72] duration metric: took 6.005521162s to wait for apiserver process to appear ...
	I1229 07:59:50.893100  953059 api_server.go:88] waiting for apiserver healthz status ...
	I1229 07:59:50.893152  953059 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:59:50.899438  953059 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1229 07:59:50.902478  953059 addons.go:530] duration metric: took 6.015044314s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1229 07:59:50.905509  953059 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1229 07:59:50.905578  953059 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
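	(Editor's note: the bracketed list above is the verbose /healthz body the apiserver returns while the rbac/bootstrap-roles post-start hook is still pending; the next probe at 07:59:51 returns 200. For reference, the same verbose form can be requested directly — an illustrative command, not part of this run, assuming kubectl is pointed at this profile's context:

	  kubectl --context newest-cni-921639 get --raw '/healthz?verbose'
	)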
	I1229 07:59:51.394183  953059 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1229 07:59:51.402386  953059 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1229 07:59:51.403594  953059 api_server.go:141] control plane version: v1.35.0
	I1229 07:59:51.403621  953059 api_server.go:131] duration metric: took 510.481935ms to wait for apiserver health ...
	I1229 07:59:51.403631  953059 system_pods.go:43] waiting for kube-system pods to appear ...
	I1229 07:59:51.407310  953059 system_pods.go:59] 8 kube-system pods found
	I1229 07:59:51.407357  953059 system_pods.go:61] "coredns-7d764666f9-t9fsk" [a6a64d27-c39d-4289-ad6c-240e23b61c7c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:51.407368  953059 system_pods.go:61] "etcd-newest-cni-921639" [f5f342fa-4ca0-479a-aa4d-2cdc7f5d0a2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1229 07:59:51.407377  953059 system_pods.go:61] "kindnet-lbs8p" [1e342362-a751-4336-93d1-a1bdb3c1f16a] Running
	I1229 07:59:51.407385  953059 system_pods.go:61] "kube-apiserver-newest-cni-921639" [dd73ba12-116f-4275-9d77-4e5f5f761657] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1229 07:59:51.407398  953059 system_pods.go:61] "kube-controller-manager-newest-cni-921639" [de7abdd9-1e08-4a9f-a47d-78f714e98094] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1229 07:59:51.407405  953059 system_pods.go:61] "kube-proxy-z62qd" [bb67aae3-6fe1-4c26-b760-5b1ba067fd96] Running
	I1229 07:59:51.407415  953059 system_pods.go:61] "kube-scheduler-newest-cni-921639" [9bcb6bda-4afb-4736-873f-90a2fdc42008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1229 07:59:51.407427  953059 system_pods.go:61] "storage-provisioner" [2be6bd44-f3fd-4b78-9d33-27b7c3452b52] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1229 07:59:51.407438  953059 system_pods.go:74] duration metric: took 3.800514ms to wait for pod list to return data ...
	I1229 07:59:51.407446  953059 default_sa.go:34] waiting for default service account to be created ...
	I1229 07:59:51.409832  953059 default_sa.go:45] found service account: "default"
	I1229 07:59:51.409852  953059 default_sa.go:55] duration metric: took 2.400347ms for default service account to be created ...
	I1229 07:59:51.409867  953059 kubeadm.go:587] duration metric: took 6.522370155s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1229 07:59:51.409883  953059 node_conditions.go:102] verifying NodePressure condition ...
	I1229 07:59:51.412293  953059 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1229 07:59:51.412317  953059 node_conditions.go:123] node cpu capacity is 2
	I1229 07:59:51.412330  953059 node_conditions.go:105] duration metric: took 2.441423ms to run NodePressure ...
	I1229 07:59:51.412342  953059 start.go:242] waiting for startup goroutines ...
	I1229 07:59:51.412349  953059 start.go:247] waiting for cluster config update ...
	I1229 07:59:51.412360  953059 start.go:256] writing updated cluster config ...
	I1229 07:59:51.412652  953059 ssh_runner.go:195] Run: rm -f paused
	I1229 07:59:51.479035  953059 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1229 07:59:51.482606  953059 out.go:203] 
	W1229 07:59:51.486673  953059 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1229 07:59:51.490759  953059 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1229 07:59:51.493679  953059 out.go:179] * Done! kubectl is now configured to use "newest-cni-921639" cluster and "default" namespace by default
	W1229 07:59:51.935617  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	W1229 07:59:53.936301  950174 pod_ready.go:104] pod "coredns-7d764666f9-jv9qg" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.941587477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.950097657Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=10f7456e-cd6a-46e1-95ca-45e560a5fb77 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.959208834Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-z62qd/POD" id=92308c2e-9182-4c12-9209-6bfa21e0b00f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.959355329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.96375529Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=92308c2e-9182-4c12-9209-6bfa21e0b00f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.980212288Z" level=info msg="Ran pod sandbox 9be75efe6ce7ee16e1c43d96eddce80c944e3e3fee14addcd3d0c1634e24f1e2 with infra container: kube-system/kindnet-lbs8p/POD" id=10f7456e-cd6a-46e1-95ca-45e560a5fb77 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.98575433Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=001e0e31-2360-4e8c-a444-44de878fee96 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.987168363Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6b8ebebe-2df4-4140-9741-442f235b0bee name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.991713613Z" level=info msg="Creating container: kube-system/kindnet-lbs8p/kindnet-cni" id=3fd13d6d-fb65-474f-bb4b-db44520a5892 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.991911203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:49 newest-cni-921639 crio[619]: time="2025-12-29T07:59:49.996569578Z" level=info msg="Ran pod sandbox 4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e with infra container: kube-system/kube-proxy-z62qd/POD" id=92308c2e-9182-4c12-9209-6bfa21e0b00f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.00310265Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=e7e447c2-4c3b-4bc8-97c2-d2b2929074f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.006187507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.008889691Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=57f2f8bc-3f02-4886-9b62-c50ff44a4ffb name=/runtime.v1.ImageService/ImageStatus
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.010466581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.013251111Z" level=info msg="Creating container: kube-system/kube-proxy-z62qd/kube-proxy" id=f5b299f8-fadd-4fad-bf66-bdc9d59ba4a5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.013598618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.03493316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.038373868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.060666941Z" level=info msg="Created container 956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669: kube-system/kindnet-lbs8p/kindnet-cni" id=3fd13d6d-fb65-474f-bb4b-db44520a5892 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.061675113Z" level=info msg="Starting container: 956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669" id=0344d078-06cc-4f7f-96cb-52d2bdc8bdb3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.064122878Z" level=info msg="Started container" PID=1072 containerID=956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669 description=kube-system/kindnet-lbs8p/kindnet-cni id=0344d078-06cc-4f7f-96cb-52d2bdc8bdb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9be75efe6ce7ee16e1c43d96eddce80c944e3e3fee14addcd3d0c1634e24f1e2
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.111939729Z" level=info msg="Created container eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805: kube-system/kube-proxy-z62qd/kube-proxy" id=f5b299f8-fadd-4fad-bf66-bdc9d59ba4a5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.112741385Z" level=info msg="Starting container: eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805" id=da789ac1-6f20-4901-b756-66fa699a4a72 name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 07:59:50 newest-cni-921639 crio[619]: time="2025-12-29T07:59:50.121714509Z" level=info msg="Started container" PID=1079 containerID=eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805 description=kube-system/kube-proxy-z62qd/kube-proxy id=da789ac1-6f20-4901-b756-66fa699a4a72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e
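	(Editor's note: the CRI-O excerpt above shows the kindnet-lbs8p and kube-proxy-z62qd sandboxes and containers being created via the CRI RuntimeService. A rough way to cross-check this against runtime state, mirroring the crictl query minikube itself issues earlier in this log, would be — illustrative invocation, assuming the profile still exists:

	  minikube -p newest-cni-921639 ssh -- \
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	)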
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	eb6b1f8340a27       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   7 seconds ago       Running             kube-proxy                1                   4be9f1f4164cd       kube-proxy-z62qd                            kube-system
	956b518531f5e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   7 seconds ago       Running             kindnet-cni               1                   9be75efe6ce7e       kindnet-lbs8p                               kube-system
	b478f4fb8f2f1       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   13 seconds ago      Running             kube-apiserver            1                   30ae2a218a116       kube-apiserver-newest-cni-921639            kube-system
	9a7655a59d3f5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   13 seconds ago      Running             kube-scheduler            1                   c886bfd5ad2e3       kube-scheduler-newest-cni-921639            kube-system
	3bea69a09373d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   13 seconds ago      Running             etcd                      1                   7678483af5366       etcd-newest-cni-921639                      kube-system
	751efea4e17c8       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   13 seconds ago      Running             kube-controller-manager   1                   79bb0ff7e7bc1       kube-controller-manager-newest-cni-921639   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-921639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-921639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=newest-cni-921639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:59:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-921639
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 07:59:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 29 Dec 2025 07:59:49 +0000   Mon, 29 Dec 2025 07:59:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-921639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9187b359-79d8-4272-9631-ead43f58bc0f
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-921639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-lbs8p                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-921639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-921639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-z62qd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-921639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node newest-cni-921639 event: Registered Node newest-cni-921639 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-921639 event: Registered Node newest-cni-921639 in Controller
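	(Editor's note: the node description above — the node.kubernetes.io/not-ready taint, the NetworkPluginNotReady condition, and PodCIDR 10.42.0.0/24 — is the usual kubectl view of this node. If the cluster were still up, an equivalent dump could be pulled directly; hedged example assuming the context created by this run:

	  kubectl --context newest-cni-921639 describe node newest-cni-921639
	)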
	
	
	==> dmesg <==
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	[Dec29 07:59] overlayfs: idmapped layers are currently not supported
	[ +10.773618] overlayfs: idmapped layers are currently not supported
	[ +19.528466] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3bea69a09373dddc5016d9d93b0bb2f3ed935c368464d98c65203289bfd93e0e] <==
	{"level":"info","ts":"2025-12-29T07:59:44.789424Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-29T07:59:44.789486Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:59:44.789541Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:44.798888Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:44.798926Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:44.822032Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-29T07:59:44.822089Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-29T07:59:45.599425Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:45.599470Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:45.599530Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:45.599543Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:45.599564Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.604883Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.604930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:45.604951Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.604961Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:45.621254Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-921639 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-29T07:59:45.621295Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:45.621587Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:45.622693Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:45.633528Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:45.635124Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:45.666277Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:45.697949Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:59:45.705698Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 07:59:57 up  4:42,  0 user,  load average: 4.79, 2.81, 2.50
	Linux newest-cni-921639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [956b518531f5e54ab1ef0f9da1ad3ef9a2e1134881091fb152908303d50bc669] <==
	I1229 07:59:50.176976       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:59:50.177223       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1229 07:59:50.178506       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:59:50.178521       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:59:50.178533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:59:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:59:50.455278       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:59:50.455382       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:59:50.455421       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:59:50.455584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b478f4fb8f2f1eb02903b42ebfed7b093318e85195cad68adf0b4498eb30c441] <==
	I1229 07:59:49.508530       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:59:49.508754       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:59:49.539951       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:59:49.540048       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:59:49.540193       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.540457       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.540516       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.540609       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:59:49.552202       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:59:49.686277       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:49.686298       1 policy_source.go:248] refreshing policies
	E1229 07:59:49.687103       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:59:49.760276       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1229 07:59:49.806461       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:59:50.093025       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:59:50.466117       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:59:50.581289       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:59:50.647164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:59:50.673656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:59:50.857193       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.0.215"}
	I1229 07:59:50.881282       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.99.89"}
	I1229 07:59:52.630341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1229 07:59:52.927692       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:59:52.979046       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:59:53.133546       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [751efea4e17c81bf50e9481d7316fc9841da1e69827a9a6545c3f9caeff8885c] <==
	I1229 07:59:52.550103       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:59:52.550203       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-921639"
	I1229 07:59:52.550328       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1229 07:59:52.549773       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.551707       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.554276       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557716       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557760       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557900       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557921       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.557939       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.558039       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569016       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569187       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569229       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569273       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569337       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569726       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569807       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.569819       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:59:52.569824       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:59:52.569901       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.576487       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.576701       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:52.613273       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [eb6b1f8340a2793c431a556be006c047660f2b1a13a4a368bc5de8e94e86e805] <==
	I1229 07:59:50.528652       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:59:50.738847       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:50.889219       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:50.889260       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1229 07:59:50.889342       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:59:50.956521       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:59:50.957011       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:59:50.967102       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:59:50.967511       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:59:50.967583       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:50.972407       1 config.go:200] "Starting service config controller"
	I1229 07:59:50.972492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:59:50.972536       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:59:50.972568       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:59:50.972610       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:59:50.972641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:59:50.972840       1 config.go:309] "Starting node config controller"
	I1229 07:59:50.972861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:59:51.072978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:59:51.073023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1229 07:59:51.073070       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:59:51.073338       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9a7655a59d3f525f641f92d24707bbb399020aacfccb7c1c32b929dd55619786] <==
	I1229 07:59:46.574537       1 serving.go:386] Generated self-signed cert in-memory
	W1229 07:59:49.249296       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1229 07:59:49.249325       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1229 07:59:49.249335       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1229 07:59:49.249352       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1229 07:59:49.388224       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1229 07:59:49.388255       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:49.402774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:59:49.415600       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:49.417343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:59:49.417521       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1229 07:59:49.720287       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.634447     740 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-921639" containerName="etcd"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.634698     740 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-921639" containerName="kube-controller-manager"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.720199     740 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770199     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-lib-modules\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770262     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-lib-modules\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770297     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb67aae3-6fe1-4c26-b760-5b1ba067fd96-xtables-lock\") pod \"kube-proxy-z62qd\" (UID: \"bb67aae3-6fe1-4c26-b760-5b1ba067fd96\") " pod="kube-system/kube-proxy-z62qd"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770330     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-cni-cfg\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.770350     740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e342362-a751-4336-93d1-a1bdb3c1f16a-xtables-lock\") pod \"kindnet-lbs8p\" (UID: \"1e342362-a751-4336-93d1-a1bdb3c1f16a\") " pod="kube-system/kindnet-lbs8p"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.825084     740 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.838793     740 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.838876     740 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.838905     740 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.840658     740 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.842993     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-921639\" already exists" pod="kube-system/kube-controller-manager-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.843031     740 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.865911     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-921639\" already exists" pod="kube-system/kube-scheduler-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.865948     740 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.887724     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-921639\" already exists" pod="kube-system/etcd-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: I1229 07:59:49.887760     740 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: E1229 07:59:49.905217     740 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-921639\" already exists" pod="kube-system/kube-apiserver-newest-cni-921639"
	Dec 29 07:59:49 newest-cni-921639 kubelet[740]: W1229 07:59:49.989546     740 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/829fd18972bc745d8fc58ff2b4f5c178bdd4618839245da931bffad3f9fa14c6/crio-4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e WatchSource:0}: Error finding container 4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e: Status 404 returned error can't find the container with id 4be9f1f4164cdc5deb7cd9650d79daae3b70af6606aff3c68bac7a70f57cdd9e
	Dec 29 07:59:52 newest-cni-921639 kubelet[740]: E1229 07:59:52.491119     740 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-921639" containerName="kube-scheduler"
	Dec 29 07:59:52 newest-cni-921639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 07:59:52 newest-cni-921639 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 07:59:52 newest-cni-921639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921639 -n newest-cni-921639
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921639 -n newest-cni-921639: exit status 2 (380.004696ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-921639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh: exit status 1 (93.005663ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-t9fsk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-swfgp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-ht5xh" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-921639 describe pod coredns-7d764666f9-t9fsk storage-provisioner dashboard-metrics-scraper-867fb5f87b-swfgp kubernetes-dashboard-b84665fb8-ht5xh: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.22s)
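The post-mortem above can be replayed by hand with the same commands the harness runs; a minimal sketch, using the profile and context name from this run (the field-selector/jsonpath form mirrors the helpers_test.go:270 invocation above):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-921639 -n newest-cni-921639
	kubectl --context newest-cni-921639 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'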

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-358521 --alsologtostderr -v=1
E1229 08:00:24.559961  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-358521 --alsologtostderr -v=1: exit status 80 (2.516297744s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-358521 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 08:00:22.313074  957318 out.go:360] Setting OutFile to fd 1 ...
	I1229 08:00:22.313168  957318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 08:00:22.313174  957318 out.go:374] Setting ErrFile to fd 2...
	I1229 08:00:22.313179  957318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 08:00:22.313445  957318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 08:00:22.313697  957318 out.go:368] Setting JSON to false
	I1229 08:00:22.313713  957318 mustload.go:66] Loading cluster: default-k8s-diff-port-358521
	I1229 08:00:22.314108  957318 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 08:00:22.314568  957318 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-358521 --format={{.State.Status}}
	I1229 08:00:22.333183  957318 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 08:00:22.333502  957318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 08:00:22.441421  957318 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-29 08:00:22.429697527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 08:00:22.442416  957318 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766979747-22353/minikube-v1.37.0-1766979747-22353-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766979747-22353-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-358521 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1229 08:00:22.454292  957318 out.go:179] * Pausing node default-k8s-diff-port-358521 ... 
	I1229 08:00:22.466510  957318 host.go:66] Checking if "default-k8s-diff-port-358521" exists ...
	I1229 08:00:22.466873  957318 ssh_runner.go:195] Run: systemctl --version
	I1229 08:00:22.466917  957318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-358521
	I1229 08:00:22.494274  957318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/default-k8s-diff-port-358521/id_rsa Username:docker}
	I1229 08:00:22.634592  957318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 08:00:22.665786  957318 pause.go:52] kubelet running: true
	I1229 08:00:22.665861  957318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 08:00:23.012530  957318 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 08:00:23.012630  957318 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 08:00:23.101927  957318 cri.go:96] found id: "25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3"
	I1229 08:00:23.101947  957318 cri.go:96] found id: "fbaf43378e43520166b5d58421eceaf0fd7e3220c28778395aa7956fecd5ec61"
	I1229 08:00:23.101953  957318 cri.go:96] found id: "96d0aec28d96721f81b355efebc44fd6738930d8643fd08085fbd14de6def54f"
	I1229 08:00:23.101956  957318 cri.go:96] found id: "62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c"
	I1229 08:00:23.101960  957318 cri.go:96] found id: "72fa2c332e12bff08025a682a41e88670c82aabcb4025ed422bad209da0f1393"
	I1229 08:00:23.101964  957318 cri.go:96] found id: "a4f88aac9f2dbaaf6365f7c9256e299eb2b16499b16a628127e27a0b5b3fc4bb"
	I1229 08:00:23.101968  957318 cri.go:96] found id: "11aa31a91a49f940abda8a1b7073c7d96f30401c3f0cdb07575481baf5debcf6"
	I1229 08:00:23.101971  957318 cri.go:96] found id: "8e26f46b068e60b4d589559c907f515b0d55f5afe2ad46b7724291d57e19c3b0"
	I1229 08:00:23.101974  957318 cri.go:96] found id: "6c80f6b65c94ccf99cdc68674c3619fa3cbd4d5629edd42feaebf1f5d3c39436"
	I1229 08:00:23.101989  957318 cri.go:96] found id: "014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	I1229 08:00:23.101992  957318 cri.go:96] found id: "c049649404149843d7066ee08650f04dc2f03a7a6462d981d4bd64715c8d238b"
	I1229 08:00:23.101995  957318 cri.go:96] found id: ""
	I1229 08:00:23.102048  957318 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 08:00:23.131005  957318 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T08:00:23Z" level=error msg="open /run/runc: no such file or directory"
	I1229 08:00:23.469592  957318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 08:00:23.494215  957318 pause.go:52] kubelet running: false
	I1229 08:00:23.494350  957318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 08:00:23.727696  957318 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 08:00:23.727826  957318 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 08:00:23.815198  957318 cri.go:96] found id: "25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3"
	I1229 08:00:23.815273  957318 cri.go:96] found id: "fbaf43378e43520166b5d58421eceaf0fd7e3220c28778395aa7956fecd5ec61"
	I1229 08:00:23.815293  957318 cri.go:96] found id: "96d0aec28d96721f81b355efebc44fd6738930d8643fd08085fbd14de6def54f"
	I1229 08:00:23.815314  957318 cri.go:96] found id: "62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c"
	I1229 08:00:23.815348  957318 cri.go:96] found id: "72fa2c332e12bff08025a682a41e88670c82aabcb4025ed422bad209da0f1393"
	I1229 08:00:23.815374  957318 cri.go:96] found id: "a4f88aac9f2dbaaf6365f7c9256e299eb2b16499b16a628127e27a0b5b3fc4bb"
	I1229 08:00:23.815392  957318 cri.go:96] found id: "11aa31a91a49f940abda8a1b7073c7d96f30401c3f0cdb07575481baf5debcf6"
	I1229 08:00:23.815410  957318 cri.go:96] found id: "8e26f46b068e60b4d589559c907f515b0d55f5afe2ad46b7724291d57e19c3b0"
	I1229 08:00:23.815422  957318 cri.go:96] found id: "6c80f6b65c94ccf99cdc68674c3619fa3cbd4d5629edd42feaebf1f5d3c39436"
	I1229 08:00:23.815429  957318 cri.go:96] found id: "014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	I1229 08:00:23.815432  957318 cri.go:96] found id: "c049649404149843d7066ee08650f04dc2f03a7a6462d981d4bd64715c8d238b"
	I1229 08:00:23.815436  957318 cri.go:96] found id: ""
	I1229 08:00:23.815486  957318 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 08:00:24.221287  957318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 08:00:24.234935  957318 pause.go:52] kubelet running: false
	I1229 08:00:24.235001  957318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1229 08:00:24.516634  957318 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1229 08:00:24.516714  957318 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1229 08:00:24.710566  957318 cri.go:96] found id: "25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3"
	I1229 08:00:24.710587  957318 cri.go:96] found id: "fbaf43378e43520166b5d58421eceaf0fd7e3220c28778395aa7956fecd5ec61"
	I1229 08:00:24.710593  957318 cri.go:96] found id: "96d0aec28d96721f81b355efebc44fd6738930d8643fd08085fbd14de6def54f"
	I1229 08:00:24.710597  957318 cri.go:96] found id: "62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c"
	I1229 08:00:24.710600  957318 cri.go:96] found id: "72fa2c332e12bff08025a682a41e88670c82aabcb4025ed422bad209da0f1393"
	I1229 08:00:24.710604  957318 cri.go:96] found id: "a4f88aac9f2dbaaf6365f7c9256e299eb2b16499b16a628127e27a0b5b3fc4bb"
	I1229 08:00:24.710607  957318 cri.go:96] found id: "11aa31a91a49f940abda8a1b7073c7d96f30401c3f0cdb07575481baf5debcf6"
	I1229 08:00:24.710611  957318 cri.go:96] found id: "8e26f46b068e60b4d589559c907f515b0d55f5afe2ad46b7724291d57e19c3b0"
	I1229 08:00:24.710614  957318 cri.go:96] found id: "6c80f6b65c94ccf99cdc68674c3619fa3cbd4d5629edd42feaebf1f5d3c39436"
	I1229 08:00:24.710620  957318 cri.go:96] found id: "014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	I1229 08:00:24.710624  957318 cri.go:96] found id: "c049649404149843d7066ee08650f04dc2f03a7a6462d981d4bd64715c8d238b"
	I1229 08:00:24.710627  957318 cri.go:96] found id: ""
	I1229 08:00:24.710684  957318 ssh_runner.go:195] Run: sudo runc list -f json
	I1229 08:00:24.732213  957318 out.go:203] 
	W1229 08:00:24.735867  957318 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T08:00:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T08:00:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1229 08:00:24.735890  957318 out.go:285] * 
	* 
	W1229 08:00:24.745028  957318 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 08:00:24.748286  957318 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-358521 --alsologtostderr -v=1 failed: exit status 80
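The failing step is visible in the stderr above: crictl lists the running CRI containers successfully, but the pause path then shells out to "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory", and minikube surfaces that as GUEST_PAUSE / exit status 80. A minimal manual check inside the node, assuming the profile from this run is still up (the crictl invocation is copied from the log; which runc root CRI-O is actually configured with is not shown here):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-358521 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-358521 -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system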
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-358521
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-358521:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2",
	        "Created": "2025-12-29T07:58:06.602342108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 950300,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:59:15.815569438Z",
	            "FinishedAt": "2025-12-29T07:59:14.53887049Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/hosts",
	        "LogPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2-json.log",
	        "Name": "/default-k8s-diff-port-358521",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-358521:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-358521",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2",
	                "LowerDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-358521",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-358521/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-358521",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-358521",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-358521",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9129f41a7c4720de72ac6f847ed5831d36a18c33c192e74f6d430a6c2ba1d4f9",
	            "SandboxKey": "/var/run/docker/netns/9129f41a7c47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-358521": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:e9:4c:d9:35:3d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "551d951d21be2de7f92b5cdbab7af8ef8b7b661920c518b5b155e983f73740a9",
	                    "EndpointID": "0508799a408cb1823a1c8067822c6b5f7a9180c8101399c47a07fa8284614d69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-358521",
	                        "f07d6857d9ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
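The Ports block in this inspect output is what the harness uses to reach the node over SSH; the Go template seen in the pause stderr above (the cli_runner.go line at 08:00:22.466917) reads the 22/tcp host port straight out of this data. Run by hand it looks like this, with the container name taken from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-358521

On this run that prints 33853, matching the sshutil.go new-ssh-client line in the stderr log.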
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521: exit status 2 (572.626303ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-358521 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-358521 logs -n 25: (1.445217405s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524                │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524                │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524                │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-358521 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-358521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 08:00 UTC │
	│ addons  │ enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ stop    │ -p newest-cni-921639 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-921639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ image   │ newest-cni-921639 image list --format=json                                                                                                                                                                                                    │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ pause   │ -p newest-cni-921639 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ delete  │ -p newest-cni-921639                                                                                                                                                                                                                          │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 08:00 UTC │
	│ delete  │ -p newest-cni-921639                                                                                                                                                                                                                          │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p test-preload-dl-gcs-273719 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-273719        │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-273719                                                                                                                                                                                                                 │ test-preload-dl-gcs-273719        │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p test-preload-dl-github-158002 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-158002     │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ delete  │ -p test-preload-dl-github-158002                                                                                                                                                                                                              │ test-preload-dl-github-158002     │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-586553 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-586553 │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-586553                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-586553 │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p auto-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-095300                       │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ image   │ default-k8s-diff-port-358521 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ pause   │ -p default-k8s-diff-port-358521 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 08:00:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 08:00:18.188373  956970 out.go:360] Setting OutFile to fd 1 ...
	I1229 08:00:18.188487  956970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 08:00:18.188499  956970 out.go:374] Setting ErrFile to fd 2...
	I1229 08:00:18.188504  956970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 08:00:18.188774  956970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 08:00:18.189330  956970 out.go:368] Setting JSON to false
	I1229 08:00:18.190214  956970 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16970,"bootTime":1766978248,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 08:00:18.190330  956970 start.go:143] virtualization:  
	I1229 08:00:18.193613  956970 out.go:179] * [auto-095300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 08:00:18.197791  956970 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 08:00:18.197854  956970 notify.go:221] Checking for updates...
	I1229 08:00:18.204297  956970 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 08:00:18.207350  956970 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 08:00:18.210430  956970 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 08:00:18.213421  956970 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 08:00:18.216405  956970 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 08:00:18.219954  956970 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 08:00:18.220079  956970 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 08:00:18.254111  956970 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 08:00:18.254240  956970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 08:00:18.321310  956970 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 08:00:18.311655119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 08:00:18.321418  956970 docker.go:319] overlay module found
	I1229 08:00:18.324581  956970 out.go:179] * Using the docker driver based on user configuration
	I1229 08:00:18.327487  956970 start.go:309] selected driver: docker
	I1229 08:00:18.327506  956970 start.go:928] validating driver "docker" against <nil>
	I1229 08:00:18.327534  956970 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 08:00:18.328264  956970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 08:00:18.381403  956970 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 08:00:18.371581354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 08:00:18.381551  956970 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 08:00:18.381781  956970 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 08:00:18.384762  956970 out.go:179] * Using Docker driver with root privileges
	I1229 08:00:18.387549  956970 cni.go:84] Creating CNI manager for ""
	I1229 08:00:18.387629  956970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 08:00:18.387644  956970 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 08:00:18.387723  956970 start.go:353] cluster config:
	{Name:auto-095300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-095300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1229 08:00:18.390835  956970 out.go:179] * Starting "auto-095300" primary control-plane node in "auto-095300" cluster
	I1229 08:00:18.393573  956970 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 08:00:18.396576  956970 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 08:00:18.399373  956970 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 08:00:18.399441  956970 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 08:00:18.399451  956970 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 08:00:18.399455  956970 cache.go:65] Caching tarball of preloaded images
	I1229 08:00:18.399705  956970 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 08:00:18.399718  956970 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 08:00:18.399838  956970 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/config.json ...
	I1229 08:00:18.399859  956970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/config.json: {Name:mk5443977285c5dc25bc5e5ec0edb2d1634d0f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 08:00:18.419262  956970 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 08:00:18.419287  956970 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 08:00:18.419305  956970 cache.go:243] Successfully downloaded all kic artifacts
	I1229 08:00:18.419337  956970 start.go:360] acquireMachinesLock for auto-095300: {Name:mkb30a242c2c331b541ee4f982fb5f929f73d388 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 08:00:18.419454  956970 start.go:364] duration metric: took 95.672µs to acquireMachinesLock for "auto-095300"
	I1229 08:00:18.419486  956970 start.go:93] Provisioning new machine with config: &{Name:auto-095300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-095300 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 08:00:18.419561  956970 start.go:125] createHost starting for "" (driver="docker")
	I1229 08:00:18.422992  956970 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 08:00:18.423216  956970 start.go:159] libmachine.API.Create for "auto-095300" (driver="docker")
	I1229 08:00:18.423254  956970 client.go:173] LocalClient.Create starting
	I1229 08:00:18.423321  956970 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 08:00:18.423359  956970 main.go:144] libmachine: Decoding PEM data...
	I1229 08:00:18.423384  956970 main.go:144] libmachine: Parsing certificate...
	I1229 08:00:18.423441  956970 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 08:00:18.423464  956970 main.go:144] libmachine: Decoding PEM data...
	I1229 08:00:18.423479  956970 main.go:144] libmachine: Parsing certificate...
	I1229 08:00:18.423850  956970 cli_runner.go:164] Run: docker network inspect auto-095300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 08:00:18.440205  956970 cli_runner.go:211] docker network inspect auto-095300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 08:00:18.440293  956970 network_create.go:284] running [docker network inspect auto-095300] to gather additional debugging logs...
	I1229 08:00:18.440310  956970 cli_runner.go:164] Run: docker network inspect auto-095300
	W1229 08:00:18.456527  956970 cli_runner.go:211] docker network inspect auto-095300 returned with exit code 1
	I1229 08:00:18.456560  956970 network_create.go:287] error running [docker network inspect auto-095300]: docker network inspect auto-095300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-095300 not found
	I1229 08:00:18.456573  956970 network_create.go:289] output of [docker network inspect auto-095300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-095300 not found
	
	** /stderr **
	I1229 08:00:18.456688  956970 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 08:00:18.473190  956970 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 08:00:18.473595  956970 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 08:00:18.473942  956970 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 08:00:18.474249  956970 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-551d951d21be IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:8f:e0:e3:ee:f1} reservation:<nil>}
	I1229 08:00:18.474737  956970 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a52810}
	I1229 08:00:18.474759  956970 network_create.go:124] attempt to create docker network auto-095300 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 08:00:18.474813  956970 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-095300 auto-095300
	I1229 08:00:18.545286  956970 network_create.go:108] docker network auto-095300 192.168.85.0/24 created
	I1229 08:00:18.545320  956970 kic.go:121] calculated static IP "192.168.85.2" for the "auto-095300" container
	I1229 08:00:18.545410  956970 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 08:00:18.562887  956970 cli_runner.go:164] Run: docker volume create auto-095300 --label name.minikube.sigs.k8s.io=auto-095300 --label created_by.minikube.sigs.k8s.io=true
	I1229 08:00:18.580171  956970 oci.go:103] Successfully created a docker volume auto-095300
	I1229 08:00:18.580265  956970 cli_runner.go:164] Run: docker run --rm --name auto-095300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-095300 --entrypoint /usr/bin/test -v auto-095300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 08:00:20.688726  956970 cli_runner.go:217] Completed: docker run --rm --name auto-095300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-095300 --entrypoint /usr/bin/test -v auto-095300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib: (2.108410022s)
	I1229 08:00:20.688757  956970 oci.go:107] Successfully prepared a docker volume auto-095300
	I1229 08:00:20.688841  956970 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 08:00:20.688853  956970 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 08:00:20.689023  956970 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-095300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
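
The network probe above (skipping the occupied 192.168.49/58/67/76 blocks and creating auto-095300 on 192.168.85.0/24) can be replayed by hand against the same Docker host. A minimal sketch, using the minikube-created labels shown in the create command; this is illustrative, not the harness's own code:

    # Subnets currently held by minikube-created networks on this host.
    for n in $(docker network ls --filter label=created_by.minikube.sigs.k8s.io=true --format '{{.Name}}'); do
      printf '%s -> ' "$n"
      docker network inspect "$n" --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    done

    # Equivalent of the create call in the log, once a free /24 has been picked
    # (192.168.85.0/24 happened to be the first free block in this run).
    docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=auto-095300 auto-095300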
	
	
	==> CRI-O <==
	Dec 29 08:00:01 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:01.520752998Z" level=info msg="Error loading conmon cgroup of container 44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518: cgroup deleted" id=9eeec0e1-1970-4817-be35-dd6a675e2f41 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 08:00:01 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:01.52672925Z" level=info msg="Removed container 44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j/dashboard-metrics-scraper" id=9eeec0e1-1970-4817-be35-dd6a675e2f41 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 08:00:02 default-k8s-diff-port-358521 conmon[1168]: conmon 62f3c1daec0de7e68473 <ninfo>: container 1178 exited with status 1
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.53340252Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c6e14c5a-8e41-469b-8698-e2a40b9ea7ed name=/runtime.v1.ImageService/ImageStatus
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.541292831Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cefa6655-f6b5-458d-a867-ec791c75b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.548208586Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dc52f244-9b04-4679-bea9-8e62cb57322d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.549966417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581234865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581427194Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/93f38e1d11b72466bf25180e7e261e94a99ccaa24429e1a2f42e3843b6fd7d25/merged/etc/passwd: no such file or directory"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581447806Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/93f38e1d11b72466bf25180e7e261e94a99ccaa24429e1a2f42e3843b6fd7d25/merged/etc/group: no such file or directory"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581733199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.616254201Z" level=info msg="Created container 25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3: kube-system/storage-provisioner/storage-provisioner" id=dc52f244-9b04-4679-bea9-8e62cb57322d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.617520353Z" level=info msg="Starting container: 25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3" id=8c3e3152-8275-49a9-9a21-bb20e5cfd55d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.621315657Z" level=info msg="Started container" PID=1680 containerID=25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3 description=kube-system/storage-provisioner/storage-provisioner id=8c3e3152-8275-49a9-9a21-bb20e5cfd55d name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb07d98832b8e01508ad8339187d56c9bb2411efef7b077b64d3f16f4c4984d0
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.126457772Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.126955154Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.131795956Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.131962242Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.136701677Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.136902852Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.141690812Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.141725372Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.141754057Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.146499711Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.146536717Z" level=info msg="Updated default CNI network name to kindnet"
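
CRI-O runs on the node as a systemd unit, so the same stream (with more context than the window shown above) can be pulled from the journal. A sketch, assuming the profile name from this run:

    # Tail the CRI-O journal on the node for the window covered by this report.
    minikube ssh -p default-k8s-diff-port-358521 "sudo journalctl -u crio --since '10 min ago' --no-pager"

    # Or follow it live while re-running the failing step.
    minikube ssh -p default-k8s-diff-port-358521 "sudo journalctl -u crio -f"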
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	25b925b3c5f32       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   fb07d98832b8e       storage-provisioner                                    kube-system
	014a6fbd40613       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   133e76272f219       dashboard-metrics-scraper-867fb5f87b-wkf4j             kubernetes-dashboard
	c049649404149       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   bf17774a5482b       kubernetes-dashboard-b84665fb8-jn77m                   kubernetes-dashboard
	31190936643b3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   ce25232980087       busybox                                                default
	fbaf43378e435       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago       Running             coredns                     1                   8cd78b22644a1       coredns-7d764666f9-jv9qg                               kube-system
	96d0aec28d967       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago       Running             kube-proxy                  1                   37b24c50b5eda       kube-proxy-58t84                                       kube-system
	62f3c1daec0de       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   fb07d98832b8e       storage-provisioner                                    kube-system
	72fa2c332e12b       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago       Running             kindnet-cni                 1                   bb361d1d3ae59       kindnet-x9pdb                                          kube-system
	a4f88aac9f2db       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   63025da060c28       kube-apiserver-default-k8s-diff-port-358521            kube-system
	11aa31a91a49f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   8bbf4e7feb24d       kube-scheduler-default-k8s-diff-port-358521            kube-system
	8e26f46b068e6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   e493c23e26b65       kube-controller-manager-default-k8s-diff-port-358521   kube-system
	6c80f6b65c94c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   d4d72dd5bbda4       etcd-default-k8s-diff-port-358521                      kube-system
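
The status table above is the node-local crictl view. To poke at the same state interactively, something along these lines works (profile name and container ID taken from the table; the flags are a sketch, not the harness's own invocation):

    # All CRI containers, including exited ones.
    minikube ssh -p default-k8s-diff-port-358521 "sudo crictl ps -a"

    # Only the exited ones, e.g. the restarting dashboard-metrics-scraper above.
    minikube ssh -p default-k8s-diff-port-358521 "sudo crictl ps -a --state exited"

    # Logs for a specific container ID from the CONTAINER column.
    minikube ssh -p default-k8s-diff-port-358521 "sudo crictl logs 014a6fbd40613"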
	
	
	==> coredns [fbaf43378e43520166b5d58421eceaf0fd7e3220c28778395aa7956fecd5ec61] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53551 - 8654 "HINFO IN 5363829487046422997.8163480252714934540. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017568736s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
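
CoreDNS started with an unsynced Kubernetes API and logged watch failures while the control plane was restarting; the "Plugins not ready" readiness messages clear once the kubernetes plugin syncs. A quick way to confirm DNS is healthy afterwards (the labels are the standard kubeadm/minikube ones, and the probe image is just an example):

    # Pod status and the most recent CoreDNS log lines.
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20

    # End-to-end resolution check from a throwaway pod.
    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default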
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-358521
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-358521
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=default-k8s-diff-port-358521
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_58_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-358521
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 08:00:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-358521
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9d844aca-8eb8-4646-88f8-c1cdf6fd6261
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-jv9qg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-default-k8s-diff-port-358521                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-x9pdb                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-358521             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-358521    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-58t84                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-358521             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-wkf4j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-jn77m                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node default-k8s-diff-port-358521 event: Registered Node default-k8s-diff-port-358521 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node default-k8s-diff-port-358521 event: Registered Node default-k8s-diff-port-358521 in Controller
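
The describe output shows the node Ready since 07:58:47, with one RegisteredNode event per controller-manager start. The same facts can be pulled in scriptable form, for example:

    # Just the Ready condition status for this node.
    kubectl get node default-k8s-diff-port-358521 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

    # Node-scoped events, newest last.
    kubectl get events --field-selector involvedObject.name=default-k8s-diff-port-358521 --sort-by=.lastTimestamp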
	
	
	==> dmesg <==
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	[Dec29 07:59] overlayfs: idmapped layers are currently not supported
	[ +10.773618] overlayfs: idmapped layers are currently not supported
	[ +19.528466] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c80f6b65c94ccf99cdc68674c3619fa3cbd4d5629edd42feaebf1f5d3c39436] <==
	{"level":"info","ts":"2025-12-29T07:59:25.381059Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:59:25.381127Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:59:25.381380Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:25.381423Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:25.382290Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-29T07:59:25.384968Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:59:25.385160Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:25.828857Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:25.828924Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:25.828965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:25.828977Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:25.828992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.832856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.832907Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:25.832928Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.832937Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.847303Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:25.848260Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:25.855563Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:59:25.855852Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:25.856578Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:25.875219Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:59:25.896902Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:25.896951Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:25.847277Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-358521 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	
	
	==> kernel <==
	 08:00:26 up  4:42,  0 user,  load average: 3.54, 2.67, 2.46
	Linux default-k8s-diff-port-358521 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72fa2c332e12bff08025a682a41e88670c82aabcb4025ed422bad209da0f1393] <==
	I1229 07:59:32.863275       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:59:32.864946       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:59:32.865104       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:59:32.865118       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:59:32.865129       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:59:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:59:33.114301       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:59:33.114341       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:59:33.114354       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:59:33.120463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 08:00:03.113848       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 08:00:03.115272       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 08:00:03.121111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 08:00:03.121226       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 08:00:04.815326       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 08:00:04.815494       1 metrics.go:72] Registering metrics
	I1229 08:00:04.815615       1 controller.go:711] "Syncing nftables rules"
	I1229 08:00:13.116180       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 08:00:13.116914       1 main.go:301] handling current node
	I1229 08:00:23.116907       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 08:00:23.116952       1 main.go:301] handling current node
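
The "dial tcp 10.96.0.1:443: i/o timeout" watch failures coincide with the control-plane restart; kindnet's caches sync again at 08:00:04 once the apiserver is reachable. To confirm the service VIP answers from inside the cluster (probe pod and image name are just an example):

    kubectl run api-probe --rm -it --restart=Never --image=curlimages/curl --command -- \
      curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz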
	
	
	==> kube-apiserver [a4f88aac9f2dbaaf6365f7c9256e299eb2b16499b16a628127e27a0b5b3fc4bb] <==
	I1229 07:59:31.573597       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:59:31.573624       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:59:31.580465       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:59:31.580618       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1229 07:59:31.581489       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:59:31.581653       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:59:31.589902       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:59:31.596335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:59:31.604735       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:59:31.605902       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:59:31.606330       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:31.606340       1 policy_source.go:248] refreshing policies
	I1229 07:59:31.615098       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:59:31.620773       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:59:31.632964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:59:31.983927       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:59:33.100122       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:59:33.221912       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:59:33.277926       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:59:33.295152       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:59:33.487525       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.162.51"}
	I1229 07:59:33.590371       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.2.202"}
	I1229 07:59:35.508959       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:59:35.704994       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:59:35.748848       1 controller.go:667] quota admission added evaluator for: endpoints
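
After the restart the apiserver re-registers its admission evaluators and re-allocates the dashboard ClusterIPs. Aggregate health of the restarted apiserver can be read directly from its health endpoints:

    # Per-check readiness of the restarted apiserver.
    kubectl get --raw '/readyz?verbose'

    # Liveness and overall health.
    kubectl get --raw '/livez'
    kubectl get --raw '/healthz'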
	
	
	==> kube-controller-manager [8e26f46b068e60b4d589559c907f515b0d55f5afe2ad46b7724291d57e19c3b0] <==
	I1229 07:59:35.125177       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:59:35.125270       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-358521"
	I1229 07:59:35.125316       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125344       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125394       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125434       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:59:35.125494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125726       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125795       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125835       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125891       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126089       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126262       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126558       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126570       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126795       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126804       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.135513       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.135546       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.143168       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.143272       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.144371       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:59:35.144401       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:59:35.161992       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.178020       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [96d0aec28d96721f81b355efebc44fd6738930d8643fd08085fbd14de6def54f] <==
	I1229 07:59:33.546767       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:59:33.630855       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:33.733127       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:33.733182       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:59:33.733287       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:59:33.759428       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:59:33.759494       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:59:33.772505       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:59:33.772894       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:59:33.772921       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:33.773974       1 config.go:200] "Starting service config controller"
	I1229 07:59:33.773999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:59:33.774997       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:59:33.775010       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:59:33.775026       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:59:33.775029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:59:33.775541       1 config.go:309] "Starting node config controller"
	I1229 07:59:33.775548       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:59:33.775555       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:59:33.893131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:59:33.893166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:59:33.874095       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [11aa31a91a49f940abda8a1b7073c7d96f30401c3f0cdb07575481baf5debcf6] <==
	I1229 07:59:31.115742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:31.151302       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:59:31.160548       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:59:31.160595       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:31.161395       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1229 07:59:31.447932       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:59:31.448123       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:59:31.448202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:59:31.448290       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:59:31.448354       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:59:31.448441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:59:31.448531       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:59:31.448587       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:59:31.448685       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:59:31.448766       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:59:31.448828       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:59:31.448902       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:59:31.448939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:59:31.448988       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:59:31.449079       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:59:31.449160       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:59:31.449224       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:59:31.472485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:59:31.490453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1229 07:59:32.382010       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 29 07:59:49 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:49.523971     791 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-jn77m" podStartSLOduration=9.131667398 podStartE2EDuration="14.522001109s" podCreationTimestamp="2025-12-29 07:59:35 +0000 UTC" firstStartedPulling="2025-12-29 07:59:36.036148442 +0000 UTC m=+12.302497944" lastFinishedPulling="2025-12-29 07:59:41.426482153 +0000 UTC m=+17.692831655" observedRunningTime="2025-12-29 07:59:42.453768555 +0000 UTC m=+18.720118065" watchObservedRunningTime="2025-12-29 07:59:49.522001109 +0000 UTC m=+25.788350611"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:50.480421     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:50.480461     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:50.480634     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:50.483135     791 scope.go:122] "RemoveContainer" containerID="660710d09b28df319e2be3e62d01449f631a44106b5ceca801e7ef5118b434a0"
	Dec 29 07:59:51 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:51.483638     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 07:59:51 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:51.483683     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 07:59:51 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:51.483871     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 07:59:55 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:55.990626     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 07:59:55 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:55.990683     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 07:59:55 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:55.990901     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 08:00:00 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:00.940658     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 08:00:00 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:00.941453     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 08:00:01 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:01.510066     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 08:00:02 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:02.515117     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 08:00:02 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:02.515160     791 scope.go:122] "RemoveContainer" containerID="014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	Dec 29 08:00:02 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:02.515519     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 08:00:03 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:03.524320     791 scope.go:122] "RemoveContainer" containerID="62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c"
	Dec 29 08:00:05 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:05.990129     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 08:00:05 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:05.990173     791 scope.go:122] "RemoveContainer" containerID="014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	Dec 29 08:00:05 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:05.990337     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 08:00:07 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:07.740931     791 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jv9qg" containerName="coredns"
	Dec 29 08:00:22 default-k8s-diff-port-358521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 08:00:23 default-k8s-diff-port-358521 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 08:00:23 default-k8s-diff-port-358521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c049649404149843d7066ee08650f04dc2f03a7a6462d981d4bd64715c8d238b] <==
	2025/12/29 07:59:41 Starting overwatch
	2025/12/29 07:59:41 Using namespace: kubernetes-dashboard
	2025/12/29 07:59:41 Using in-cluster config to connect to apiserver
	2025/12/29 07:59:41 Using secret token for csrf signing
	2025/12/29 07:59:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:59:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:59:41 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:59:41 Generating JWE encryption key
	2025/12/29 07:59:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:59:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:59:43 Initializing JWE encryption key from synchronized object
	2025/12/29 07:59:43 Creating in-cluster Sidecar client
	2025/12/29 07:59:43 Serving insecurely on HTTP port: 9090
	2025/12/29 07:59:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 08:00:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3] <==
	I1229 08:00:03.650801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 08:00:03.666349       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 08:00:03.668902       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 08:00:03.672723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:07.128344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:11.388639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:14.987617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:18.041035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:21.063843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:21.068998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 08:00:21.069208       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 08:00:21.069411       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-358521_087c4789-8b08-4a99-a4a5-0141cd9cda08!
	I1229 08:00:21.070348       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e14c6f9d-8118-4d5d-8fe5-5b8946a8ad35", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-358521_087c4789-8b08-4a99-a4a5-0141cd9cda08 became leader
	W1229 08:00:21.082206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:21.105149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 08:00:21.170527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-358521_087c4789-8b08-4a99-a4a5-0141cd9cda08!
	W1229 08:00:23.108965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:23.115928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:25.119421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:25.130381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c] <==
	I1229 07:59:32.897939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 08:00:02.900149       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521: exit status 2 (353.397959ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-358521
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-358521:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2",
	        "Created": "2025-12-29T07:58:06.602342108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 950300,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:59:15.815569438Z",
	            "FinishedAt": "2025-12-29T07:59:14.53887049Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/hosts",
	        "LogPath": "/var/lib/docker/containers/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2/f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2-json.log",
	        "Name": "/default-k8s-diff-port-358521",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-358521:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-358521",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f07d6857d9ad2c21fa1dfae31ca56c5c6159f3bb01d6ae0a32f58848a2e147b2",
	                "LowerDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f-init/diff:/var/lib/docker/overlay2/474975edb20f32e7b9a7a1072386facc47acc10492abfaeb7b8c1c0ab8baf762/diff",
	                "MergedDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/294ef4bdf8bbaacd1c175276c7947fe59f88159ed7daca907c8e939607d8433f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-358521",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-358521/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-358521",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-358521",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-358521",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9129f41a7c4720de72ac6f847ed5831d36a18c33c192e74f6d430a6c2ba1d4f9",
	            "SandboxKey": "/var/run/docker/netns/9129f41a7c47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-358521": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:e9:4c:d9:35:3d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "551d951d21be2de7f92b5cdbab7af8ef8b7b661920c518b5b155e983f73740a9",
	                    "EndpointID": "0508799a408cb1823a1c8067822c6b5f7a9180c8101399c47a07fa8284614d69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-358521",
	                        "f07d6857d9ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521: exit status 2 (345.586722ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-358521 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-358521 logs -n 25: (1.467639315s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ pause   │ -p embed-certs-664524 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-664524                │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524                │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ delete  │ -p embed-certs-664524                                                                                                                                                                                                                         │ embed-certs-664524                │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:58 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-358521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-358521 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-358521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 08:00 UTC │
	│ addons  │ enable metrics-server -p newest-cni-921639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ stop    │ -p newest-cni-921639 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-921639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ start   │ -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ image   │ newest-cni-921639 image list --format=json                                                                                                                                                                                                    │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 07:59 UTC │
	│ pause   │ -p newest-cni-921639 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │                     │
	│ delete  │ -p newest-cni-921639                                                                                                                                                                                                                          │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 07:59 UTC │ 29 Dec 25 08:00 UTC │
	│ delete  │ -p newest-cni-921639                                                                                                                                                                                                                          │ newest-cni-921639                 │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p test-preload-dl-gcs-273719 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-273719        │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-273719                                                                                                                                                                                                                 │ test-preload-dl-gcs-273719        │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p test-preload-dl-github-158002 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-158002     │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ delete  │ -p test-preload-dl-github-158002                                                                                                                                                                                                              │ test-preload-dl-github-158002     │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-586553 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-586553 │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-586553                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-586553 │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ start   │ -p auto-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-095300                       │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	│ image   │ default-k8s-diff-port-358521 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │ 29 Dec 25 08:00 UTC │
	│ pause   │ -p default-k8s-diff-port-358521 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-358521      │ jenkins │ v1.37.0 │ 29 Dec 25 08:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 08:00:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 08:00:18.188373  956970 out.go:360] Setting OutFile to fd 1 ...
	I1229 08:00:18.188487  956970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 08:00:18.188499  956970 out.go:374] Setting ErrFile to fd 2...
	I1229 08:00:18.188504  956970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 08:00:18.188774  956970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 08:00:18.189330  956970 out.go:368] Setting JSON to false
	I1229 08:00:18.190214  956970 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16970,"bootTime":1766978248,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 08:00:18.190330  956970 start.go:143] virtualization:  
	I1229 08:00:18.193613  956970 out.go:179] * [auto-095300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 08:00:18.197791  956970 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 08:00:18.197854  956970 notify.go:221] Checking for updates...
	I1229 08:00:18.204297  956970 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 08:00:18.207350  956970 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 08:00:18.210430  956970 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 08:00:18.213421  956970 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 08:00:18.216405  956970 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 08:00:18.219954  956970 config.go:182] Loaded profile config "default-k8s-diff-port-358521": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 08:00:18.220079  956970 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 08:00:18.254111  956970 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 08:00:18.254240  956970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 08:00:18.321310  956970 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 08:00:18.311655119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 08:00:18.321418  956970 docker.go:319] overlay module found
	I1229 08:00:18.324581  956970 out.go:179] * Using the docker driver based on user configuration
	I1229 08:00:18.327487  956970 start.go:309] selected driver: docker
	I1229 08:00:18.327506  956970 start.go:928] validating driver "docker" against <nil>
	I1229 08:00:18.327534  956970 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 08:00:18.328264  956970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 08:00:18.381403  956970 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 08:00:18.371581354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 08:00:18.381551  956970 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 08:00:18.381781  956970 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1229 08:00:18.384762  956970 out.go:179] * Using Docker driver with root privileges
	I1229 08:00:18.387549  956970 cni.go:84] Creating CNI manager for ""
	I1229 08:00:18.387629  956970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 08:00:18.387644  956970 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 08:00:18.387723  956970 start.go:353] cluster config:
	{Name:auto-095300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-095300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 08:00:18.390835  956970 out.go:179] * Starting "auto-095300" primary control-plane node in "auto-095300" cluster
	I1229 08:00:18.393573  956970 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 08:00:18.396576  956970 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 08:00:18.399373  956970 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 08:00:18.399441  956970 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1229 08:00:18.399451  956970 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 08:00:18.399455  956970 cache.go:65] Caching tarball of preloaded images
	I1229 08:00:18.399705  956970 preload.go:251] Found /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1229 08:00:18.399718  956970 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1229 08:00:18.399838  956970 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/config.json ...
	I1229 08:00:18.399859  956970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/config.json: {Name:mk5443977285c5dc25bc5e5ec0edb2d1634d0f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 08:00:18.419262  956970 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 08:00:18.419287  956970 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 08:00:18.419305  956970 cache.go:243] Successfully downloaded all kic artifacts
	I1229 08:00:18.419337  956970 start.go:360] acquireMachinesLock for auto-095300: {Name:mkb30a242c2c331b541ee4f982fb5f929f73d388 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 08:00:18.419454  956970 start.go:364] duration metric: took 95.672µs to acquireMachinesLock for "auto-095300"
	I1229 08:00:18.419486  956970 start.go:93] Provisioning new machine with config: &{Name:auto-095300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-095300 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1229 08:00:18.419561  956970 start.go:125] createHost starting for "" (driver="docker")
	I1229 08:00:18.422992  956970 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 08:00:18.423216  956970 start.go:159] libmachine.API.Create for "auto-095300" (driver="docker")
	I1229 08:00:18.423254  956970 client.go:173] LocalClient.Create starting
	I1229 08:00:18.423321  956970 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/ca.pem
	I1229 08:00:18.423359  956970 main.go:144] libmachine: Decoding PEM data...
	I1229 08:00:18.423384  956970 main.go:144] libmachine: Parsing certificate...
	I1229 08:00:18.423441  956970 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-739997/.minikube/certs/cert.pem
	I1229 08:00:18.423464  956970 main.go:144] libmachine: Decoding PEM data...
	I1229 08:00:18.423479  956970 main.go:144] libmachine: Parsing certificate...
	I1229 08:00:18.423850  956970 cli_runner.go:164] Run: docker network inspect auto-095300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 08:00:18.440205  956970 cli_runner.go:211] docker network inspect auto-095300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 08:00:18.440293  956970 network_create.go:284] running [docker network inspect auto-095300] to gather additional debugging logs...
	I1229 08:00:18.440310  956970 cli_runner.go:164] Run: docker network inspect auto-095300
	W1229 08:00:18.456527  956970 cli_runner.go:211] docker network inspect auto-095300 returned with exit code 1
	I1229 08:00:18.456560  956970 network_create.go:287] error running [docker network inspect auto-095300]: docker network inspect auto-095300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-095300 not found
	I1229 08:00:18.456573  956970 network_create.go:289] output of [docker network inspect auto-095300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-095300 not found
	
	** /stderr **
	I1229 08:00:18.456688  956970 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 08:00:18.473190  956970 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
	I1229 08:00:18.473595  956970 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce89f48ecd30 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:ee:74:81:93:1a} reservation:<nil>}
	I1229 08:00:18.473942  956970 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8e2ff2b4ebfe IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:c7:24:a7:9b:5d} reservation:<nil>}
	I1229 08:00:18.474249  956970 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-551d951d21be IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:8f:e0:e3:ee:f1} reservation:<nil>}
	I1229 08:00:18.474737  956970 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a52810}
	I1229 08:00:18.474759  956970 network_create.go:124] attempt to create docker network auto-095300 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 08:00:18.474813  956970 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-095300 auto-095300
	I1229 08:00:18.545286  956970 network_create.go:108] docker network auto-095300 192.168.85.0/24 created
	I1229 08:00:18.545320  956970 kic.go:121] calculated static IP "192.168.85.2" for the "auto-095300" container
	I1229 08:00:18.545410  956970 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 08:00:18.562887  956970 cli_runner.go:164] Run: docker volume create auto-095300 --label name.minikube.sigs.k8s.io=auto-095300 --label created_by.minikube.sigs.k8s.io=true
	I1229 08:00:18.580171  956970 oci.go:103] Successfully created a docker volume auto-095300
	I1229 08:00:18.580265  956970 cli_runner.go:164] Run: docker run --rm --name auto-095300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-095300 --entrypoint /usr/bin/test -v auto-095300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 08:00:20.688726  956970 cli_runner.go:217] Completed: docker run --rm --name auto-095300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-095300 --entrypoint /usr/bin/test -v auto-095300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib: (2.108410022s)
	I1229 08:00:20.688757  956970 oci.go:107] Successfully prepared a docker volume auto-095300
	I1229 08:00:20.688841  956970 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1229 08:00:20.688853  956970 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 08:00:20.689023  956970 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-095300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
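For reference, the cluster config and provisioning steps logged above correspond roughly to a start invocation of the following shape; the flag values are only inferred from the logged config (profile auto-095300, docker driver, crio runtime, 3072MB memory, 2 CPUs, Kubernetes v1.35.0) and this is not the literal command line used by the test harness:

  # sketch inferred from the config above, not the harness's actual invocation
  minikube start -p auto-095300 \
    --driver=docker \
    --container-runtime=crio \
    --memory=3072 --cpus=2 \
    --kubernetes-version=v1.35.0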
	
	
	==> CRI-O <==
	Dec 29 08:00:01 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:01.520752998Z" level=info msg="Error loading conmon cgroup of container 44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518: cgroup deleted" id=9eeec0e1-1970-4817-be35-dd6a675e2f41 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 08:00:01 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:01.52672925Z" level=info msg="Removed container 44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j/dashboard-metrics-scraper" id=9eeec0e1-1970-4817-be35-dd6a675e2f41 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 29 08:00:02 default-k8s-diff-port-358521 conmon[1168]: conmon 62f3c1daec0de7e68473 <ninfo>: container 1178 exited with status 1
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.53340252Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c6e14c5a-8e41-469b-8698-e2a40b9ea7ed name=/runtime.v1.ImageService/ImageStatus
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.541292831Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cefa6655-f6b5-458d-a867-ec791c75b05f name=/runtime.v1.ImageService/ImageStatus
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.548208586Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dc52f244-9b04-4679-bea9-8e62cb57322d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.549966417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581234865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581427194Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/93f38e1d11b72466bf25180e7e261e94a99ccaa24429e1a2f42e3843b6fd7d25/merged/etc/passwd: no such file or directory"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581447806Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/93f38e1d11b72466bf25180e7e261e94a99ccaa24429e1a2f42e3843b6fd7d25/merged/etc/group: no such file or directory"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.581733199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.616254201Z" level=info msg="Created container 25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3: kube-system/storage-provisioner/storage-provisioner" id=dc52f244-9b04-4679-bea9-8e62cb57322d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.617520353Z" level=info msg="Starting container: 25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3" id=8c3e3152-8275-49a9-9a21-bb20e5cfd55d name=/runtime.v1.RuntimeService/StartContainer
	Dec 29 08:00:03 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:03.621315657Z" level=info msg="Started container" PID=1680 containerID=25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3 description=kube-system/storage-provisioner/storage-provisioner id=8c3e3152-8275-49a9-9a21-bb20e5cfd55d name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb07d98832b8e01508ad8339187d56c9bb2411efef7b077b64d3f16f4c4984d0
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.126457772Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.126955154Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.131795956Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.131962242Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.136701677Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.136902852Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.141690812Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.141725372Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.141754057Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.146499711Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 29 08:00:13 default-k8s-diff-port-358521 crio[663]: time="2025-12-29T08:00:13.146536717Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	25b925b3c5f32       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   fb07d98832b8e       storage-provisioner                                    kube-system
	014a6fbd40613       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   133e76272f219       dashboard-metrics-scraper-867fb5f87b-wkf4j             kubernetes-dashboard
	c049649404149       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   bf17774a5482b       kubernetes-dashboard-b84665fb8-jn77m                   kubernetes-dashboard
	31190936643b3       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   ce25232980087       busybox                                                default
	fbaf43378e435       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   8cd78b22644a1       coredns-7d764666f9-jv9qg                               kube-system
	96d0aec28d967       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago       Running             kube-proxy                  1                   37b24c50b5eda       kube-proxy-58t84                                       kube-system
	62f3c1daec0de       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   fb07d98832b8e       storage-provisioner                                    kube-system
	72fa2c332e12b       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   bb361d1d3ae59       kindnet-x9pdb                                          kube-system
	a4f88aac9f2db       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   63025da060c28       kube-apiserver-default-k8s-diff-port-358521            kube-system
	11aa31a91a49f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   8bbf4e7feb24d       kube-scheduler-default-k8s-diff-port-358521            kube-system
	8e26f46b068e6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   e493c23e26b65       kube-controller-manager-default-k8s-diff-port-358521   kube-system
	6c80f6b65c94c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   d4d72dd5bbda4       etcd-default-k8s-diff-port-358521                      kube-system
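The container status table above can be reproduced against the same node with crictl over minikube ssh; a minimal sketch, assuming the default-k8s-diff-port-358521 profile is still running:

  # list all CRI-O managed containers, including exited ones
  minikube -p default-k8s-diff-port-358521 ssh "sudo crictl ps -a"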
	
	
	==> coredns [fbaf43378e43520166b5d58421eceaf0fd7e3220c28778395aa7956fecd5ec61] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53551 - 8654 "HINFO IN 5363829487046422997.8163480252714934540. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017568736s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
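The repeated "waiting for Kubernetes API" and "Failed to watch" lines above are expected while the apiserver is restarting and clear once it is reachable again. A quick way to confirm CoreDNS has settled afterwards (the standard kubeadm label k8s-app=kube-dns is assumed here):

  kubectl -n kube-system get pods -l k8s-app=kube-dns
  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20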
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-358521
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-358521
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
	                    minikube.k8s.io/name=default-k8s-diff-port-358521
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_29T07_58_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Dec 2025 07:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-358521
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Dec 2025 08:00:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Dec 2025 08:00:02 +0000   Mon, 29 Dec 2025 07:58:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-358521
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 2506c5fd3ba27c830bbf12616951fa01
	  System UUID:                9d844aca-8eb8-4646-88f8-c1cdf6fd6261
	  Boot ID:                    4dffab7a-bfd5-4391-ba10-0663a972ae6a
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-7d764666f9-jv9qg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-default-k8s-diff-port-358521                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-x9pdb                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-default-k8s-diff-port-358521             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-358521    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-58t84                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-default-k8s-diff-port-358521             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-wkf4j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-jn77m                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  116s  node-controller  Node default-k8s-diff-port-358521 event: Registered Node default-k8s-diff-port-358521 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node default-k8s-diff-port-358521 event: Registered Node default-k8s-diff-port-358521 in Controller
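The node description above is kubectl describe node output; the same capacity and allocatable figures can also be pulled directly, for example:

  kubectl describe node default-k8s-diff-port-358521
  kubectl get node default-k8s-diff-port-358521 -o jsonpath='{.status.allocatable}'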
	
	
	==> dmesg <==
	[Dec29 07:33] overlayfs: idmapped layers are currently not supported
	[Dec29 07:34] overlayfs: idmapped layers are currently not supported
	[  +8.294408] overlayfs: idmapped layers are currently not supported
	[Dec29 07:35] overlayfs: idmapped layers are currently not supported
	[ +15.823602] overlayfs: idmapped layers are currently not supported
	[Dec29 07:36] overlayfs: idmapped layers are currently not supported
	[ +33.251322] overlayfs: idmapped layers are currently not supported
	[Dec29 07:38] overlayfs: idmapped layers are currently not supported
	[Dec29 07:40] overlayfs: idmapped layers are currently not supported
	[Dec29 07:41] overlayfs: idmapped layers are currently not supported
	[Dec29 07:42] overlayfs: idmapped layers are currently not supported
	[Dec29 07:43] overlayfs: idmapped layers are currently not supported
	[Dec29 07:44] overlayfs: idmapped layers are currently not supported
	[Dec29 07:46] overlayfs: idmapped layers are currently not supported
	[Dec29 07:51] overlayfs: idmapped layers are currently not supported
	[ +35.113219] overlayfs: idmapped layers are currently not supported
	[Dec29 07:53] overlayfs: idmapped layers are currently not supported
	[Dec29 07:54] overlayfs: idmapped layers are currently not supported
	[Dec29 07:55] overlayfs: idmapped layers are currently not supported
	[Dec29 07:56] overlayfs: idmapped layers are currently not supported
	[Dec29 07:57] overlayfs: idmapped layers are currently not supported
	[Dec29 07:58] overlayfs: idmapped layers are currently not supported
	[Dec29 07:59] overlayfs: idmapped layers are currently not supported
	[ +10.773618] overlayfs: idmapped layers are currently not supported
	[ +19.528466] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c80f6b65c94ccf99cdc68674c3619fa3cbd4d5629edd42feaebf1f5d3c39436] <==
	{"level":"info","ts":"2025-12-29T07:59:25.381059Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:59:25.381127Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-29T07:59:25.381380Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:25.381423Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-29T07:59:25.382290Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-29T07:59:25.384968Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-29T07:59:25.385160Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-29T07:59:25.828857Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:25.828924Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:25.828965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-29T07:59:25.828977Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:25.828992Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.832856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.832907Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-29T07:59:25.832928Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.832937Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-29T07:59:25.847303Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:25.848260Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:25.855563Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-29T07:59:25.855852Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-29T07:59:25.856578Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-29T07:59:25.875219Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-29T07:59:25.896902Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:25.896951Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-29T07:59:25.847277Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-358521 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	
	
	==> kernel <==
	 08:00:28 up  4:43,  0 user,  load average: 3.34, 2.64, 2.45
	Linux default-k8s-diff-port-358521 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72fa2c332e12bff08025a682a41e88670c82aabcb4025ed422bad209da0f1393] <==
	I1229 07:59:32.863275       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1229 07:59:32.864946       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1229 07:59:32.865104       1 main.go:148] setting mtu 1500 for CNI 
	I1229 07:59:32.865118       1 main.go:178] kindnetd IP family: "ipv4"
	I1229 07:59:32.865129       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-29T07:59:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1229 07:59:33.114301       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1229 07:59:33.114341       1 controller.go:381] "Waiting for informer caches to sync"
	I1229 07:59:33.114354       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1229 07:59:33.120463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1229 08:00:03.113848       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1229 08:00:03.115272       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1229 08:00:03.121111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1229 08:00:03.121226       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1229 08:00:04.815326       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1229 08:00:04.815494       1 metrics.go:72] Registering metrics
	I1229 08:00:04.815615       1 controller.go:711] "Syncing nftables rules"
	I1229 08:00:13.116180       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 08:00:13.116914       1 main.go:301] handling current node
	I1229 08:00:23.116907       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1229 08:00:23.116952       1 main.go:301] handling current node
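The i/o timeouts against 10.96.0.1:443 above indicate the in-cluster apiserver service was briefly unreachable while the control plane came back, and they stop once the informer caches sync. One way to double-check kindnet afterwards (the app=kindnet label is assumed to match minikube's kindnet DaemonSet):

  kubectl -n kube-system get pods -l app=kindnet -o wide
  kubectl -n kube-system logs -l app=kindnet --tail=20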
	
	
	==> kube-apiserver [a4f88aac9f2dbaaf6365f7c9256e299eb2b16499b16a628127e27a0b5b3fc4bb] <==
	I1229 07:59:31.573597       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1229 07:59:31.573624       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1229 07:59:31.580465       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1229 07:59:31.580618       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1229 07:59:31.581489       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1229 07:59:31.581653       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1229 07:59:31.589902       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1229 07:59:31.596335       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1229 07:59:31.604735       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1229 07:59:31.605902       1 cache.go:39] Caches are synced for autoregister controller
	I1229 07:59:31.606330       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:31.606340       1 policy_source.go:248] refreshing policies
	I1229 07:59:31.615098       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1229 07:59:31.620773       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1229 07:59:31.632964       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1229 07:59:31.983927       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1229 07:59:33.100122       1 controller.go:667] quota admission added evaluator for: namespaces
	I1229 07:59:33.221912       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1229 07:59:33.277926       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1229 07:59:33.295152       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1229 07:59:33.487525       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.162.51"}
	I1229 07:59:33.590371       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.2.202"}
	I1229 07:59:35.508959       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1229 07:59:35.704994       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1229 07:59:35.748848       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8e26f46b068e60b4d589559c907f515b0d55f5afe2ad46b7724291d57e19c3b0] <==
	I1229 07:59:35.125177       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1229 07:59:35.125270       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-358521"
	I1229 07:59:35.125316       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125344       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125394       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125434       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1229 07:59:35.125494       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125726       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125795       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125835       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.125891       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126089       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126262       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126558       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126570       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126795       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.126804       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.135513       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.135546       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.143168       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.143272       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.144371       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1229 07:59:35.144401       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1229 07:59:35.161992       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:35.178020       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [96d0aec28d96721f81b355efebc44fd6738930d8643fd08085fbd14de6def54f] <==
	I1229 07:59:33.546767       1 server_linux.go:53] "Using iptables proxy"
	I1229 07:59:33.630855       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:33.733127       1 shared_informer.go:377] "Caches are synced"
	I1229 07:59:33.733182       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1229 07:59:33.733287       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1229 07:59:33.759428       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1229 07:59:33.759494       1 server_linux.go:136] "Using iptables Proxier"
	I1229 07:59:33.772505       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1229 07:59:33.772894       1 server.go:529] "Version info" version="v1.35.0"
	I1229 07:59:33.772921       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:33.773974       1 config.go:200] "Starting service config controller"
	I1229 07:59:33.773999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1229 07:59:33.774997       1 config.go:106] "Starting endpoint slice config controller"
	I1229 07:59:33.775010       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1229 07:59:33.775026       1 config.go:403] "Starting serviceCIDR config controller"
	I1229 07:59:33.775029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1229 07:59:33.775541       1 config.go:309] "Starting node config controller"
	I1229 07:59:33.775548       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1229 07:59:33.775555       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1229 07:59:33.893131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1229 07:59:33.893166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1229 07:59:33.874095       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [11aa31a91a49f940abda8a1b7073c7d96f30401c3f0cdb07575481baf5debcf6] <==
	I1229 07:59:31.115742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1229 07:59:31.151302       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1229 07:59:31.160548       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1229 07:59:31.160595       1 shared_informer.go:370] "Waiting for caches to sync"
	I1229 07:59:31.161395       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1229 07:59:31.447932       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1229 07:59:31.448123       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1229 07:59:31.448202       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1229 07:59:31.448290       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1229 07:59:31.448354       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1229 07:59:31.448441       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1229 07:59:31.448531       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1229 07:59:31.448587       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1229 07:59:31.448685       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1229 07:59:31.448766       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1229 07:59:31.448828       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1229 07:59:31.448902       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1229 07:59:31.448939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1229 07:59:31.448988       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1229 07:59:31.449079       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1229 07:59:31.449160       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1229 07:59:31.449224       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1229 07:59:31.472485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1229 07:59:31.490453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1229 07:59:32.382010       1 shared_informer.go:377] "Caches are synced"
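The "Failed to watch ... is forbidden" errors above are typical immediately after an apiserver restart, before its RBAC caches are warm; the final "Caches are synced" line shows the scheduler recovered. If they persisted, one possible check is whether the scheduler identity can list the affected resources (requires impersonation rights):

  kubectl auth can-i list nodes --as=system:kube-scheduler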
	
	
	==> kubelet <==
	Dec 29 07:59:49 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:49.523971     791 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-jn77m" podStartSLOduration=9.131667398 podStartE2EDuration="14.522001109s" podCreationTimestamp="2025-12-29 07:59:35 +0000 UTC" firstStartedPulling="2025-12-29 07:59:36.036148442 +0000 UTC m=+12.302497944" lastFinishedPulling="2025-12-29 07:59:41.426482153 +0000 UTC m=+17.692831655" observedRunningTime="2025-12-29 07:59:42.453768555 +0000 UTC m=+18.720118065" watchObservedRunningTime="2025-12-29 07:59:49.522001109 +0000 UTC m=+25.788350611"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:50.480421     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:50.480461     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:50.480634     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 07:59:50 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:50.483135     791 scope.go:122] "RemoveContainer" containerID="660710d09b28df319e2be3e62d01449f631a44106b5ceca801e7ef5118b434a0"
	Dec 29 07:59:51 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:51.483638     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 07:59:51 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:51.483683     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 07:59:51 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:51.483871     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 07:59:55 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:55.990626     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 07:59:55 default-k8s-diff-port-358521 kubelet[791]: I1229 07:59:55.990683     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 07:59:55 default-k8s-diff-port-358521 kubelet[791]: E1229 07:59:55.990901     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 08:00:00 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:00.940658     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 08:00:00 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:00.941453     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 08:00:01 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:01.510066     791 scope.go:122] "RemoveContainer" containerID="44c32808e1c504bb39e10376b679348564bec8875f0c7c463a8f2b59a9df2518"
	Dec 29 08:00:02 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:02.515117     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 08:00:02 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:02.515160     791 scope.go:122] "RemoveContainer" containerID="014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	Dec 29 08:00:02 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:02.515519     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 08:00:03 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:03.524320     791 scope.go:122] "RemoveContainer" containerID="62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c"
	Dec 29 08:00:05 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:05.990129     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" containerName="dashboard-metrics-scraper"
	Dec 29 08:00:05 default-k8s-diff-port-358521 kubelet[791]: I1229 08:00:05.990173     791 scope.go:122] "RemoveContainer" containerID="014a6fbd40613e6687d1b0ae5e329715fa384da69d3ac905f2bf2c519346ad18"
	Dec 29 08:00:05 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:05.990337     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-wkf4j_kubernetes-dashboard(094b8a3b-5891-43ea-905f-7fb8358ae0cf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-wkf4j" podUID="094b8a3b-5891-43ea-905f-7fb8358ae0cf"
	Dec 29 08:00:07 default-k8s-diff-port-358521 kubelet[791]: E1229 08:00:07.740931     791 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jv9qg" containerName="coredns"
	Dec 29 08:00:22 default-k8s-diff-port-358521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 29 08:00:23 default-k8s-diff-port-358521 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 29 08:00:23 default-k8s-diff-port-358521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c049649404149843d7066ee08650f04dc2f03a7a6462d981d4bd64715c8d238b] <==
	2025/12/29 07:59:41 Using namespace: kubernetes-dashboard
	2025/12/29 07:59:41 Using in-cluster config to connect to apiserver
	2025/12/29 07:59:41 Using secret token for csrf signing
	2025/12/29 07:59:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/29 07:59:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/29 07:59:41 Successful initial request to the apiserver, version: v1.35.0
	2025/12/29 07:59:41 Generating JWE encryption key
	2025/12/29 07:59:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/29 07:59:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/29 07:59:43 Initializing JWE encryption key from synchronized object
	2025/12/29 07:59:43 Creating in-cluster Sidecar client
	2025/12/29 07:59:43 Serving insecurely on HTTP port: 9090
	2025/12/29 07:59:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 08:00:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/29 07:59:41 Starting overwatch
	
	
	==> storage-provisioner [25b925b3c5f3260a3fc0825cdbdf6e7508f598523f6dbf7f1f6d2c4ba07d5bf3] <==
	I1229 08:00:03.650801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1229 08:00:03.666349       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1229 08:00:03.668902       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1229 08:00:03.672723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:07.128344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:11.388639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:14.987617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:18.041035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:21.063843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:21.068998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 08:00:21.069208       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1229 08:00:21.069411       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-358521_087c4789-8b08-4a99-a4a5-0141cd9cda08!
	I1229 08:00:21.070348       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e14c6f9d-8118-4d5d-8fe5-5b8946a8ad35", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-358521_087c4789-8b08-4a99-a4a5-0141cd9cda08 became leader
	W1229 08:00:21.082206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:21.105149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1229 08:00:21.170527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-358521_087c4789-8b08-4a99-a4a5-0141cd9cda08!
	W1229 08:00:23.108965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:23.115928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:25.119421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:25.130381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:27.133550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1229 08:00:27.138434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [62f3c1daec0de7e6847351cf93d95f1216d5c1fe55f0553718f8ff8c6c10db3c] <==
	I1229 07:59:32.897939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1229 08:00:02.900149       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521: exit status 2 (455.443855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.48s)
E1229 08:06:17.350627  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.029661  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.035388  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.045708  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.066123  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.106432  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.186822  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.347314  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:23.667903  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:24.308731  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:25.589042  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:26.345049  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:27.686630  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:27.691867  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:27.702192  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:27.722694  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:27.762967  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:27.843335  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:28.003927  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:28.149204  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:28.324969  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:28.965984  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:30.246539  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:32.807166  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:33.270092  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:35.102648  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:37.927596  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:43.292005  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:43.510387  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/auto-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:06:48.167792  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/kindnet-095300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (274/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.44
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 4.58
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.2
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 127.4
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.79
48 TestAddons/StoppedEnableDisable 12.49
49 TestCertOptions 30.98
50 TestCertExpiration 226.05
58 TestErrorSpam/setup 26.63
59 TestErrorSpam/start 0.84
60 TestErrorSpam/status 1.15
61 TestErrorSpam/pause 5.76
62 TestErrorSpam/unpause 5.62
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 46.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.88
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.3
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 45.02
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.52
87 TestFunctional/serial/InvalidService 4.42
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 10.59
91 TestFunctional/parallel/DryRun 0.73
92 TestFunctional/parallel/InternationalLanguage 0.27
93 TestFunctional/parallel/StatusCmd 1.25
97 TestFunctional/parallel/ServiceCmdConnect 8.64
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 20.86
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.49
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.64
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
113 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.43
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 7.1
130 TestFunctional/parallel/ServiceCmd/List 0.53
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
133 TestFunctional/parallel/ServiceCmd/Format 0.4
134 TestFunctional/parallel/ServiceCmd/URL 0.5
135 TestFunctional/parallel/MountCmd/specific-port 2.49
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.36
137 TestFunctional/parallel/Version/short 0.11
138 TestFunctional/parallel/Version/components 0.82
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.15
144 TestFunctional/parallel/ImageCommands/Setup 0.63
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.42
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.09
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.77
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 129.14
163 TestMultiControlPlane/serial/DeployApp 7.91
164 TestMultiControlPlane/serial/PingHostFromPods 1.41
165 TestMultiControlPlane/serial/AddWorkerNode 31.64
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.6
168 TestMultiControlPlane/serial/CopyFile 20.2
169 TestMultiControlPlane/serial/StopSecondaryNode 12.93
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.1
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.32
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 115.86
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
176 TestMultiControlPlane/serial/StopCluster 36.15
177 TestMultiControlPlane/serial/RestartCluster 71.1
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
179 TestMultiControlPlane/serial/AddSecondaryNode 51.12
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 45.31
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.82
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 34.56
211 TestKicCustomNetwork/use_default_bridge_network 29.83
212 TestKicExistingNetwork 29.31
213 TestKicCustomSubnet 30.89
214 TestKicStaticIP 30.35
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 62.9
219 TestMountStart/serial/StartWithMountFirst 8.88
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.83
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.12
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 74.41
231 TestMultiNode/serial/DeployApp2Nodes 4.92
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 28.9
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.76
236 TestMultiNode/serial/CopyFile 10.73
237 TestMultiNode/serial/StopNode 2.38
238 TestMultiNode/serial/StartAfterStop 8.21
239 TestMultiNode/serial/RestartKeepsNodes 71.76
240 TestMultiNode/serial/DeleteNode 5.51
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 51.2
243 TestMultiNode/serial/ValidateNameConflict 29.98
250 TestScheduledStopUnix 102.52
253 TestInsufficientStorage 12.69
254 TestRunningBinaryUpgrade 63.03
256 TestKubernetesUpgrade 135.1
257 TestMissingContainerUpgrade 117.42
259 TestPause/serial/Start 58.01
260 TestPause/serial/SecondStartNoReconfiguration 39.28
262 TestStoppedBinaryUpgrade/Setup 1.66
263 TestStoppedBinaryUpgrade/Upgrade 70.92
264 TestStoppedBinaryUpgrade/MinikubeLogs 2.65
272 TestPreload/Start-NoPreload-PullImage 70.41
273 TestPreload/Restart-With-Preload-Check-User-Image 48.36
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
277 TestNoKubernetes/serial/StartWithK8s 26.82
278 TestNoKubernetes/serial/StartWithStopK8s 26.93
279 TestNoKubernetes/serial/Start 8.25
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
282 TestNoKubernetes/serial/ProfileList 1.04
283 TestNoKubernetes/serial/Stop 1.36
284 TestNoKubernetes/serial/StartNoArgs 7.44
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
293 TestNetworkPlugins/group/false 3.71
298 TestStartStop/group/old-k8s-version/serial/FirstStart 61.65
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
301 TestStartStop/group/old-k8s-version/serial/Stop 12.03
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 47.67
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/no-preload/serial/FirstStart 52.73
310 TestStartStop/group/no-preload/serial/DeployApp 9.32
312 TestStartStop/group/no-preload/serial/Stop 12.01
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/no-preload/serial/SecondStart 47.4
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
320 TestStartStop/group/embed-certs/serial/FirstStart 45.92
321 TestStartStop/group/embed-certs/serial/DeployApp 8.33
323 TestStartStop/group/embed-certs/serial/Stop 12.01
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
325 TestStartStop/group/embed-certs/serial/SecondStart 48.6
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.69
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.43
334 TestStartStop/group/newest-cni/serial/FirstStart 33.33
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.24
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.34
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.51
339 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/Stop 1.78
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/newest-cni/serial/SecondStart 17.08
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
348 TestPreload/PreloadSrc/gcs 4.13
349 TestPreload/PreloadSrc/github 11.7
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
352 TestPreload/PreloadSrc/gcs-cached 0.49
353 TestNetworkPlugins/group/auto/Start 64.3
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
356 TestNetworkPlugins/group/kindnet/Start 54.67
357 TestNetworkPlugins/group/auto/KubeletFlags 0.32
358 TestNetworkPlugins/group/auto/NetCatPod 9.31
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/auto/DNS 0.17
361 TestNetworkPlugins/group/auto/Localhost 0.12
362 TestNetworkPlugins/group/auto/HairPin 0.14
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
364 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
365 TestNetworkPlugins/group/kindnet/DNS 0.22
366 TestNetworkPlugins/group/kindnet/Localhost 0.15
367 TestNetworkPlugins/group/kindnet/HairPin 0.17
368 TestNetworkPlugins/group/calico/Start 80.27
369 TestNetworkPlugins/group/custom-flannel/Start 57.97
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.37
372 TestNetworkPlugins/group/calico/ControllerPod 6
373 TestNetworkPlugins/group/custom-flannel/DNS 0.17
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
376 TestNetworkPlugins/group/calico/KubeletFlags 0.3
377 TestNetworkPlugins/group/calico/NetCatPod 11.27
378 TestNetworkPlugins/group/calico/DNS 0.2
379 TestNetworkPlugins/group/calico/Localhost 0.17
380 TestNetworkPlugins/group/calico/HairPin 0.18
381 TestNetworkPlugins/group/enable-default-cni/Start 75.32
382 TestNetworkPlugins/group/flannel/Start 53.37
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
387 TestNetworkPlugins/group/flannel/NetCatPod 11.45
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
391 TestNetworkPlugins/group/flannel/DNS 0.21
392 TestNetworkPlugins/group/flannel/Localhost 0.14
393 TestNetworkPlugins/group/flannel/HairPin 0.15
394 TestNetworkPlugins/group/bridge/Start 72.83
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
396 TestNetworkPlugins/group/bridge/NetCatPod 9.27
397 TestNetworkPlugins/group/bridge/DNS 0.15
398 TestNetworkPlugins/group/bridge/Localhost 0.12
399 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (9.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-861890 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-861890 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.439291786s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.44s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1229 07:09:28.173826  741854 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1229 07:09:28.173904  741854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-861890
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-861890: exit status 85 (94.034229ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-861890 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-861890 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:18.778288  741860 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:18.778426  741860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:18.778446  741860 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:18.778453  741860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:18.778747  741860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	W1229 07:09:18.778888  741860 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22353-739997/.minikube/config/config.json: open /home/jenkins/minikube-integration/22353-739997/.minikube/config/config.json: no such file or directory
	I1229 07:09:18.779304  741860 out.go:368] Setting JSON to true
	I1229 07:09:18.780150  741860 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13911,"bootTime":1766978248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:09:18.780219  741860 start.go:143] virtualization:  
	I1229 07:09:18.786313  741860 out.go:99] [download-only-861890] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:09:18.786603  741860 notify.go:221] Checking for updates...
	W1229 07:09:18.786508  741860 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball: no such file or directory
	I1229 07:09:18.789989  741860 out.go:171] MINIKUBE_LOCATION=22353
	I1229 07:09:18.793432  741860 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:18.796775  741860 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:09:18.800059  741860 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:09:18.803341  741860 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1229 07:09:18.809626  741860 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 07:09:18.809931  741860 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:18.838652  741860 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:09:18.838762  741860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:09:18.907825  741860 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 07:09:18.898227828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:09:18.907936  741860 docker.go:319] overlay module found
	I1229 07:09:18.911108  741860 out.go:99] Using the docker driver based on user configuration
	I1229 07:09:18.911144  741860 start.go:309] selected driver: docker
	I1229 07:09:18.911152  741860 start.go:928] validating driver "docker" against <nil>
	I1229 07:09:18.911281  741860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:09:18.965103  741860 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 07:09:18.956128926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:09:18.965251  741860 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:09:18.965525  741860 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1229 07:09:18.965677  741860 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:09:18.968962  741860 out.go:171] Using Docker driver with root privileges
	I1229 07:09:18.972017  741860 cni.go:84] Creating CNI manager for ""
	I1229 07:09:18.972084  741860 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1229 07:09:18.972097  741860 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1229 07:09:18.972179  741860 start.go:353] cluster config:
	{Name:download-only-861890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-861890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:09:18.975161  741860 out.go:99] Starting "download-only-861890" primary control-plane node in "download-only-861890" cluster
	I1229 07:09:18.975182  741860 cache.go:134] Beginning downloading kic base image for docker with crio
	I1229 07:09:18.978077  741860 out.go:99] Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:09:18.978116  741860 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:09:18.978270  741860 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:09:18.993837  741860 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 07:09:18.994006  741860 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory
	I1229 07:09:18.994106  741860 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 07:09:19.029356  741860 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:09:19.029387  741860 cache.go:65] Caching tarball of preloaded images
	I1229 07:09:19.029557  741860 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:09:19.032914  741860 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1229 07:09:19.032946  741860 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:09:19.032954  741860 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1229 07:09:19.120135  741860 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1229 07:09:19.120263  741860 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1229 07:09:22.941757  741860 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1229 07:09:22.942178  741860 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/download-only-861890/config.json ...
	I1229 07:09:22.942230  741860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/download-only-861890/config.json: {Name:mkbefb70ef2f49ab082b5f7731283365ee8b151a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:09:22.942447  741860 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1229 07:09:22.942706  741860 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22353-739997/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-861890 host does not exist
	  To start a cluster, run: "minikube start -p download-only-861890"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-861890
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (4.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-544678 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-544678 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.580094024s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (4.58s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1229 07:09:33.195006  741854 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1229 07:09:33.195041  741854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-544678
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-544678: exit status 85 (64.620213ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-861890 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-861890 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ delete  │ -p download-only-861890                                                                                                                                                   │ download-only-861890 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │ 29 Dec 25 07:09 UTC │
	│ start   │ -o=json --download-only -p download-only-544678 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-544678 │ jenkins │ v1.37.0 │ 29 Dec 25 07:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:09:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:09:28.661843  742059 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:09:28.661974  742059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:28.661986  742059 out.go:374] Setting ErrFile to fd 2...
	I1229 07:09:28.661992  742059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:09:28.662252  742059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:09:28.662685  742059 out.go:368] Setting JSON to true
	I1229 07:09:28.663508  742059 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13921,"bootTime":1766978248,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:09:28.663586  742059 start.go:143] virtualization:  
	I1229 07:09:28.666918  742059 out.go:99] [download-only-544678] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:09:28.667207  742059 notify.go:221] Checking for updates...
	I1229 07:09:28.670260  742059 out.go:171] MINIKUBE_LOCATION=22353
	I1229 07:09:28.673370  742059 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:09:28.676244  742059 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:09:28.679053  742059 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:09:28.682021  742059 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1229 07:09:28.687781  742059 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 07:09:28.688091  742059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:09:28.720971  742059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:09:28.721079  742059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:09:28.770067  742059 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-29 07:09:28.761115874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:09:28.770174  742059 docker.go:319] overlay module found
	I1229 07:09:28.773174  742059 out.go:99] Using the docker driver based on user configuration
	I1229 07:09:28.773217  742059 start.go:309] selected driver: docker
	I1229 07:09:28.773224  742059 start.go:928] validating driver "docker" against <nil>
	I1229 07:09:28.773333  742059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:09:28.834325  742059 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-29 07:09:28.825400788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:09:28.834483  742059 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:09:28.834782  742059 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1229 07:09:28.834927  742059 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:09:28.838068  742059 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-544678 host does not exist
	  To start a cluster, run: "minikube start -p download-only-544678"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-544678
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I1229 07:09:34.261527  741854 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-084929 --alsologtostderr --binary-mirror http://127.0.0.1:44089 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-084929" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-084929
--- PASS: TestBinaryMirror (0.56s)
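The binary.go line above downloads kubectl from dl.k8s.io with a checksum query that points at the matching .sha256 file. A small Go sketch reconstructing that URL format, assuming only what the log line shows; the helper name and parameters are hypothetical, not minikube's download API.

package main

import "fmt"

// kubectlDownloadURL mirrors the URL format in the log line above: the release binary
// plus a "checksum=file:" query referencing the corresponding .sha256 on dl.k8s.io.
func kubectlDownloadURL(version, osName, arch string) string {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl", version, osName, arch)
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	// Prints the same URL the test run used for v1.35.0 on linux/arm64.
	fmt.Println(kubectlDownloadURL("v1.35.0", "linux", "arm64"))
}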

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-707536
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-707536: exit status 85 (69.627605ms)

                                                
                                                
-- stdout --
	* Profile "addons-707536" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-707536"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-707536
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-707536: exit status 85 (67.959166ms)

                                                
                                                
-- stdout --
	* Profile "addons-707536" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-707536"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (127.4s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-707536 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-707536 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.401228339s)
--- PASS: TestAddons/Setup (127.40s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-707536 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-707536 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.79s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-707536 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-707536 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [990cc993-f192-4298-95c5-51d8fdb4de77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [990cc993-f192-4298-95c5-51d8fdb4de77] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004430085s
addons_test.go:696: (dbg) Run:  kubectl --context addons-707536 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-707536 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-707536 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-707536 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.79s)
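The FakeCredentials test above waits up to 8m0s for a pod matching the label integration-test=busybox to reach Running before exec-ing into it. A rough client-go sketch of such a label-selector wait, assuming KUBECONFIG points at the test cluster; this is a simplified illustration, not the helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabelledPod polls the default namespace until a pod matching the selector
// is Running, roughly like the "waiting 8m0s for pods matching ..." step logged above.
func waitForLabelledPod(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Printf("%s is Running\n", p.Name)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForLabelledPod(cs, "integration-test=busybox", 8*time.Minute); err != nil {
		panic(err)
	}
}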

                                                
                                    
TestAddons/StoppedEnableDisable (12.49s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-707536
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-707536: (12.206331435s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-707536
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-707536
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-707536
--- PASS: TestAddons/StoppedEnableDisable (12.49s)

                                                
                                    
TestCertOptions (30.98s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1229 07:51:17.350857  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-600463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.091463542s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-600463 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-600463 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-600463 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-600463" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-600463
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-600463: (2.149667404s)
--- PASS: TestCertOptions (30.98s)

                                                
                                    
TestCertExpiration (226.05s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1229 07:46:17.350095  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-853545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.246991224s)
E1229 07:46:43.293078  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-853545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.298513883s)
helpers_test.go:176: Cleaning up "cert-expiration-853545" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-853545
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-853545: (2.501659113s)
--- PASS: TestCertExpiration (226.05s)
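TestCertExpiration starts the cluster with --cert-expiration=3m and then restarts it with --cert-expiration=8760h (one year); both values are ordinary Go duration strings. A tiny sketch showing how such values parse and what expiry they imply; the flag handling itself is assumed rather than taken from minikube.

package main

import (
	"fmt"
	"time"
)

// Parse the two --cert-expiration values used by the test above and print the
// expiry each one implies; purely illustrative of the duration format.
func main() {
	for _, v := range []string{"3m", "8760h"} {
		d, err := time.ParseDuration(v)
		if err != nil {
			panic(err)
		}
		fmt.Printf("--cert-expiration=%s -> valid for %v, expiring around %s\n",
			v, d, time.Now().Add(d).Format(time.RFC3339))
	}
}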

                                                
                                    
TestErrorSpam/setup (26.63s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-746392 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-746392 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-746392 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-746392 --driver=docker  --container-runtime=crio: (26.633717697s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (26.63s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.15s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 status
--- PASS: TestErrorSpam/status (1.15s)

                                                
                                    
TestErrorSpam/pause (5.76s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause: exit status 80 (1.872380754s)

                                                
                                                
-- stdout --
	* Pausing node nospam-746392 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause: exit status 80 (1.637778464s)

                                                
                                                
-- stdout --
	* Pausing node nospam-746392 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause: exit status 80 (2.245558798s)

                                                
                                                
-- stdout --
	* Pausing node nospam-746392 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.76s)

                                                
                                    
TestErrorSpam/unpause (5.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause: exit status 80 (1.922705922s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-746392 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause: exit status 80 (2.010800346s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-746392 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause: exit status 80 (1.684414873s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-746392 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-29T07:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.62s)

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 stop: (1.304681648s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-746392 --log_dir /tmp/nospam-746392 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22353-739997/.minikube/files/etc/test/nested/copy/741854/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-098723 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-098723 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (46.589365456s)
--- PASS: TestFunctional/serial/StartWithProxy (46.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (27.88s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1229 07:14:47.419847  741854 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-098723 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-098723 --alsologtostderr -v=8: (27.882081375s)
functional_test.go:678: soft start took 27.882568953s for "functional-098723" cluster.
I1229 07:15:15.302193  741854 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (27.88s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-098723 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 cache add registry.k8s.io/pause:3.1: (1.163262918s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 cache add registry.k8s.io/pause:3.3: (1.148269966s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 cache add registry.k8s.io/pause:latest: (1.164767253s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-098723 /tmp/TestFunctionalserialCacheCmdcacheadd_local2973914536/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cache add minikube-local-cache-test:functional-098723
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cache delete minikube-local-cache-test:functional-098723
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-098723
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.920035ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 kubectl -- --context functional-098723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-098723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-098723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-098723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.016604826s)
functional_test.go:776: restart took 45.016712946s for "functional-098723" cluster.
I1229 07:16:07.854095  741854 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (45.02s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-098723 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 logs: (1.473698913s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 logs --file /tmp/TestFunctionalserialLogsFileCmd1769686449/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 logs --file /tmp/TestFunctionalserialLogsFileCmd1769686449/001/logs.txt: (1.520863524s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.42s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-098723 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-098723
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-098723: exit status 115 (384.246639ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30519 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-098723 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 config get cpus: exit status 14 (89.841282ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 config get cpus: exit status 14 (60.569715ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-098723 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-098723 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 766520: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.59s)

                                                
                                    
TestFunctional/parallel/DryRun (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (217.364848ms)

                                                
                                                
-- stdout --
	* [functional-098723] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:16:46.924905  765937 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:46.925047  765937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:46.925061  765937 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:46.925067  765937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:46.925454  765937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:16:46.926072  765937 out.go:368] Setting JSON to false
	I1229 07:16:46.927028  765937 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14359,"bootTime":1766978248,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:16:46.927100  765937 start.go:143] virtualization:  
	I1229 07:16:46.930506  765937 out.go:179] * [functional-098723] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:16:46.933441  765937 notify.go:221] Checking for updates...
	I1229 07:16:46.934310  765937 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:16:46.937988  765937 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:16:46.940904  765937 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:16:46.943812  765937 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:16:46.947544  765937 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:16:46.951152  765937 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:16:46.954559  765937 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:46.955140  765937 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:16:46.987247  765937 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:16:46.987354  765937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:47.070590  765937 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:16:47.06051226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:16:47.070694  765937 docker.go:319] overlay module found
	I1229 07:16:47.073762  765937 out.go:179] * Using the docker driver based on existing profile
	I1229 07:16:47.076577  765937 start.go:309] selected driver: docker
	I1229 07:16:47.076595  765937 start.go:928] validating driver "docker" against &{Name:functional-098723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-098723 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:47.076708  765937 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:16:47.080370  765937 out.go:203] 
	W1229 07:16:47.083337  765937 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1229 07:16:47.086178  765937 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-098723 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.73s)
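Note: exit status 23 here is the intended outcome of the first dry run, which deliberately requests --memory 250MB to trigger RSRC_INSUFFICIENT_REQ_MEMORY (the usable minimum is 1800MB); the second dry run omits --memory and validates cleanly against the existing 4096MB profile. A sketch of the boundary, assuming the same binary and profile:

    out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 250MB --driver=docker --container-runtime=crio    # fails validation, exit 23
    out/minikube-linux-arm64 start -p functional-098723 --dry-run --driver=docker --container-runtime=crio                   # passes, as in the second run above
    out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 2048MB --driver=docker --container-runtime=crio   # above the 1800MB floor, so the memory check should pass (assumption)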

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (272.131503ms)

                                                
                                                
-- stdout --
	* [functional-098723] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:16:46.685051  765831 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:16:46.685209  765831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:46.685238  765831 out.go:374] Setting ErrFile to fd 2...
	I1229 07:16:46.685258  765831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:16:46.685733  765831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:16:46.686271  765831 out.go:368] Setting JSON to false
	I1229 07:16:46.687405  765831 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14359,"bootTime":1766978248,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:16:46.687483  765831 start.go:143] virtualization:  
	I1229 07:16:46.691983  765831 out.go:179] * [functional-098723] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1229 07:16:46.695002  765831 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:16:46.695164  765831 notify.go:221] Checking for updates...
	I1229 07:16:46.700855  765831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:16:46.703860  765831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:16:46.706773  765831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:16:46.709602  765831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:16:46.712564  765831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:16:46.715873  765831 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:16:46.716535  765831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:16:46.771012  765831 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:16:46.771116  765831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:16:46.853907  765831 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-29 07:16:46.837856677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:16:46.854010  765831 docker.go:319] overlay module found
	I1229 07:16:46.857481  765831 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1229 07:16:46.860358  765831 start.go:309] selected driver: docker
	I1229 07:16:46.860380  765831 start.go:928] validating driver "docker" against &{Name:functional-098723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-098723 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:16:46.860480  765831 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:16:46.864085  765831 out.go:203] 
	W1229 07:16:46.867054  765831 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1229 07:16:46.869955  765831 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)
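Note: this is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun above, only rendered in French; the English equivalent is "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". The log does not show how the locale is switched; presumably the test sets a French locale in the command environment, along the lines of (the exact variable is an assumption):

    LC_ALL=fr out/minikube-linux-arm64 start -p functional-098723 --dry-run --memory 250MB --driver=docker --container-runtime=crio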

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-098723 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-098723 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-hhlml" [eaab12a0-b0d5-4f30-b8cd-5c21f7715e7c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-hhlml" [eaab12a0-b0d5-4f30-b8cd-5c21f7715e7c] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003669468s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31022
functional_test.go:1685: http://192.168.49.2:31022: success! body:
Request served by hello-node-connect-5d95464fd4-hhlml

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31022
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
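Note: the ServiceCmdConnect flow above is the standard NodePort round trip and can be replayed with the same commands the test records (a sketch; the NodePort value 31022 is specific to this run):

    kubectl --context functional-098723 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-098723 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-098723 service hello-node-connect --url   # e.g. http://192.168.49.2:31022
    curl http://192.168.49.2:31022/                                                  # echo-server replies with the serving pod name and request headers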

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (20.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [db2b4cec-de37-4c40-9987-972bbf9ec644] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003059943s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-098723 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-098723 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-098723 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-098723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1337dc2c-341b-4e44-8b2c-cf59e00c7ad7] Pending
helpers_test.go:353: "sp-pod" [1337dc2c-341b-4e44-8b2c-cf59e00c7ad7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003582207s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-098723 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-098723 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-098723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0fbccb71-8f1a-42e9-a143-4cf76774e2a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [0fbccb71-8f1a-42e9-a143-4cf76774e2a1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002971342s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-098723 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.86s)
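Note: the point of this pass is that the claim outlives the pod: a file created through the first sp-pod is still present after the pod is deleted and recreated against the same PVC. The command outline, taken from the log (a sketch; the manifests live under testdata/storage-provisioner/ and are not shown here):

    kubectl --context functional-098723 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-098723 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-098723 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-098723 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-098723 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-098723 exec sp-pod -- ls /tmp/mount   # foo should still be listed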

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh -n functional-098723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cp functional-098723:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2082189834/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh -n functional-098723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh -n functional-098723 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.49s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/741854/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /etc/test/nested/copy/741854/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/741854.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /etc/ssl/certs/741854.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/741854.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /usr/share/ca-certificates/741854.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/7418542.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /etc/ssl/certs/7418542.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/7418542.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /usr/share/ca-certificates/7418542.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.64s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-098723 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh "sudo systemctl is-active docker": exit status 1 (379.603067ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh "sudo systemctl is-active containerd": exit status 1 (344.728857ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
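Note: both non-zero exits above are expected. With cri-o as the active runtime, docker and containerd are stopped, so systemctl is-active prints "inactive" and exits 3, which minikube ssh surfaces as exit status 1. A quick cross-check on the node (the crio line is not exercised in this log and is an assumption about the active runtime):

    out/minikube-linux-arm64 -p functional-098723 ssh "sudo systemctl is-active crio"     # expected: "active", exit 0
    out/minikube-linux-arm64 -p functional-098723 ssh "sudo systemctl is-active docker"   # "inactive", non-zero exit as recorded above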

                                                
                                    
x
+
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 763657: os: process already finished
helpers_test.go:526: unable to kill pid 763447: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-098723 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [b9f1104d-ecd7-4f20-9d85-915258262e71] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [b9f1104d-ecd7-4f20-9d85-915258262e71] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003683286s
I1229 07:16:25.783160  741854 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-098723 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.4.45 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
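Note: the tunnel sub-tests above cover the usual LoadBalancer workflow: start a tunnel, deploy nginx-svc, read the ingress IP it is assigned, and reach it directly. A sketch, assuming the tunnel is left running in the background (the 10.97.4.45 address is specific to this run):

    out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr &
    kubectl --context functional-098723 apply -f testdata/testsvc.yaml
    kubectl --context functional-098723 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # e.g. 10.97.4.45
    curl http://10.97.4.45/   # should reach nginx through the tunnel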

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-098723 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-098723 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-098723 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-g2n2n" [87c7faad-2413-4afd-b30f-49b72865f867] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-g2n2n" [87c7faad-2413-4afd-b30f-49b72865f867] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003345759s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "382.534079ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "55.353869ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "356.661878ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "50.978579ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdany-port2160046717/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766992599993404451" to /tmp/TestFunctionalparallelMountCmdany-port2160046717/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766992599993404451" to /tmp/TestFunctionalparallelMountCmdany-port2160046717/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766992599993404451" to /tmp/TestFunctionalparallelMountCmdany-port2160046717/001/test-1766992599993404451
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.895999ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1229 07:16:40.360591  741854 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 29 07:16 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 29 07:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 29 07:16 test-1766992599993404451
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh cat /mount-9p/test-1766992599993404451
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-098723 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [d46900ac-bfb0-416e-99c9-6a692468ca12] Pending
helpers_test.go:353: "busybox-mount" [d46900ac-bfb0-416e-99c9-6a692468ca12] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [d46900ac-bfb0-416e-99c9-6a692468ca12] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [d46900ac-bfb0-416e-99c9-6a692468ca12] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005383539s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-098723 logs busybox-mount
E1229 07:16:45.856263  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdany-port2160046717/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.10s)
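Note: the first findmnt failure and the 400ms retry above most likely just reflect the mount daemon still coming up; once the 9p mount is established, the host directory contents are visible from the guest. The core sequence from the log (a sketch; the temp directory path differs on every run):

    out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdany-port2160046717/001:/mount-9p &
    out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-098723 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-098723 ssh "sudo umount -f /mount-9p"   # cleanup before stopping the mount process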

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 service list
E1229 07:16:43.291299  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:16:43.301875  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:16:43.312554  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:16:43.332930  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:16:43.373343  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:16:43.453761  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 service list -o json
E1229 07:16:43.614311  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:16:43.934664  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1509: Took "551.400573ms" to run "out/minikube-linux-arm64 -p functional-098723 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30780
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 service hello-node --url --format={{.IP}}
E1229 07:16:44.575520  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30780
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdspecific-port2603116810/001:/mount-9p --alsologtostderr -v=1 --port 40989]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (476.964328ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1229 07:16:47.574837  741854 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T /mount-9p | grep 9p"
E1229 07:16:48.417315  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdspecific-port2603116810/001:/mount-9p --alsologtostderr -v=1 --port 40989] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh "sudo umount -f /mount-9p": exit status 1 (437.10629ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-098723 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdspecific-port2603116810/001:/mount-9p --alsologtostderr -v=1 --port 40989] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.49s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327634745/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327634745/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327634745/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T" /mount1: exit status 1 (889.399669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-098723 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327634745/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327634745/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-098723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327634745/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-098723 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-098723
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-098723 image ls --format short --alsologtostderr:
I1229 07:17:02.211095  768587 out.go:360] Setting OutFile to fd 1 ...
I1229 07:17:02.211586  768587 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.211602  768587 out.go:374] Setting ErrFile to fd 2...
I1229 07:17:02.211609  768587 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.212032  768587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
I1229 07:17:02.213134  768587 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.213298  768587 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.214069  768587 cli_runner.go:164] Run: docker container inspect functional-098723 --format={{.State.Status}}
I1229 07:17:02.235746  768587 ssh_runner.go:195] Run: systemctl --version
I1229 07:17:02.235809  768587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-098723
I1229 07:17:02.261683  768587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/functional-098723/id_rsa Username:docker}
I1229 07:17:02.377922  768587 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
W1229 07:17:02.425841  768587 root.go:91] failed to log command end to audit: failed to find a log row with id equals to a20550fb-bd82-41d5-a5f0-efc09b8be983
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-098723 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-098723                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test               │ functional-098723                     │ f53ae4b20aa92 │ 3.33kB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 962dbbc0e55ec │ 55.1MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-098723 image ls --format table --alsologtostderr:
I1229 07:17:02.946279  768772 out.go:360] Setting OutFile to fd 1 ...
I1229 07:17:02.946886  768772 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.946938  768772 out.go:374] Setting ErrFile to fd 2...
I1229 07:17:02.946960  768772 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.947454  768772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
I1229 07:17:02.948652  768772 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.948923  768772 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.949707  768772 cli_runner.go:164] Run: docker container inspect functional-098723 --format={{.State.Status}}
I1229 07:17:02.987157  768772 ssh_runner.go:195] Run: systemctl --version
I1229 07:17:02.987215  768772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-098723
I1229 07:17:03.016342  768772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/functional-098723/id_rsa Username:docker}
I1229 07:17:03.136768  768772 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
E1229 07:17:03.778884  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-098723 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18
eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4789170"},{"id":"f53ae4b20aa92e0af239e9e4477dc5e9215b55db4a75f649ff0118c17012bb36","repoDigests":["localhost/minikube-local-cache-test@sha256:1e75568009ec9893f223c78eb01e0a4f95ae8814452081700daa8d32d00cf3ac"],"repoTags":["localhost/minikube-local-cache-test:functional-09
8723"],"size":"3330"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46f
a0fe53d95bee9d7803900edb965d3995ddf9ae12d03","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077764"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bde
df","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"42263767"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","reg
istry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager
@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-098723 image ls --format json --alsologtostderr:
I1229 07:17:02.645052  768680 out.go:360] Setting OutFile to fd 1 ...
I1229 07:17:02.645262  768680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.645295  768680 out.go:374] Setting ErrFile to fd 2...
I1229 07:17:02.645317  768680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.645790  768680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
I1229 07:17:02.646618  768680 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.646929  768680 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.647598  768680 cli_runner.go:164] Run: docker container inspect functional-098723 --format={{.State.Status}}
I1229 07:17:02.676247  768680 ssh_runner.go:195] Run: systemctl --version
I1229 07:17:02.676314  768680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-098723
I1229 07:17:02.709823  768680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/functional-098723/id_rsa Username:docker}
I1229 07:17:02.829971  768680 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-098723 image ls --format yaml --alsologtostderr:
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "42263767"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4789170"
- id: f53ae4b20aa92e0af239e9e4477dc5e9215b55db4a75f649ff0118c17012bb36
repoDigests:
- localhost/minikube-local-cache-test@sha256:1e75568009ec9893f223c78eb01e0a4f95ae8814452081700daa8d32d00cf3ac
repoTags:
- localhost/minikube-local-cache-test:functional-098723
size: "3330"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077764"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "247562353"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-098723 image ls --format yaml --alsologtostderr:
I1229 07:17:02.300199  768605 out.go:360] Setting OutFile to fd 1 ...
I1229 07:17:02.301245  768605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.301262  768605 out.go:374] Setting ErrFile to fd 2...
I1229 07:17:02.303272  768605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.303777  768605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
I1229 07:17:02.306724  768605 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.307017  768605 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.308645  768605 cli_runner.go:164] Run: docker container inspect functional-098723 --format={{.State.Status}}
I1229 07:17:02.335512  768605 ssh_runner.go:195] Run: systemctl --version
I1229 07:17:02.335587  768605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-098723
I1229 07:17:02.356004  768605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/functional-098723/id_rsa Username:docker}
I1229 07:17:02.476702  768605 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-098723 ssh pgrep buildkitd: exit status 1 (378.228267ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image build -t localhost/my-image:functional-098723 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 image build -t localhost/my-image:functional-098723 testdata/build --alsologtostderr: (3.516296082s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-098723 image build -t localhost/my-image:functional-098723 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b7610ae3ea2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-098723
--> 7548eef15de
Successfully tagged localhost/my-image:functional-098723
7548eef15de06e3b6235ae1fc326b6aacf1977cd5cffac370e5e222bbb7c6224
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-098723 image build -t localhost/my-image:functional-098723 testdata/build --alsologtostderr:
I1229 07:17:02.889305  768758 out.go:360] Setting OutFile to fd 1 ...
I1229 07:17:02.890106  768758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.890127  768758 out.go:374] Setting ErrFile to fd 2...
I1229 07:17:02.890133  768758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 07:17:02.890509  768758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
I1229 07:17:02.891375  768758 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.892111  768758 config.go:182] Loaded profile config "functional-098723": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1229 07:17:02.892736  768758 cli_runner.go:164] Run: docker container inspect functional-098723 --format={{.State.Status}}
I1229 07:17:02.928142  768758 ssh_runner.go:195] Run: systemctl --version
I1229 07:17:02.928198  768758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-098723
I1229 07:17:02.972069  768758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/functional-098723/id_rsa Username:docker}
I1229 07:17:03.094074  768758 build_images.go:162] Building image from path: /tmp/build.4141247596.tar
I1229 07:17:03.094241  768758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1229 07:17:03.104451  768758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4141247596.tar
I1229 07:17:03.109140  768758 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4141247596.tar: stat -c "%s %y" /var/lib/minikube/build/build.4141247596.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4141247596.tar': No such file or directory
I1229 07:17:03.109180  768758 ssh_runner.go:362] scp /tmp/build.4141247596.tar --> /var/lib/minikube/build/build.4141247596.tar (3072 bytes)
I1229 07:17:03.139877  768758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4141247596
I1229 07:17:03.150013  768758 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4141247596 -xf /var/lib/minikube/build/build.4141247596.tar
I1229 07:17:03.180972  768758 crio.go:315] Building image: /var/lib/minikube/build/build.4141247596
I1229 07:17:03.181073  768758 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-098723 /var/lib/minikube/build/build.4141247596 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1229 07:17:06.299357  768758 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-098723 /var/lib/minikube/build/build.4141247596 --cgroup-manager=cgroupfs: (3.118252593s)
I1229 07:17:06.299426  768758 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4141247596
I1229 07:17:06.308280  768758 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4141247596.tar
I1229 07:17:06.317350  768758 build_images.go:218] Built localhost/my-image:functional-098723 from /tmp/build.4141247596.tar
I1229 07:17:06.317390  768758 build_images.go:134] succeeded building to: functional-098723
I1229 07:17:06.317397  768758 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.15s)
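For reference, the three STEP lines in the build output above imply a minimal Dockerfile/Containerfile along these lines (reconstructed from the logged steps only; the actual contents of testdata/build are not shown in this report and may differ):

	# Sketch of the build context implied by the STEP output above (assumed, not verbatim).
	FROM gcr.io/k8s-minikube/busybox
	# No-op layer, matching "STEP 2/3: RUN true".
	RUN true
	# Copies content.txt from the build context into the image root, matching "STEP 3/3".
	ADD content.txt /

With the crio runtime, the build is executed via podman inside the node, as the stderr log shows (sudo podman build -t localhost/my-image:functional-098723 ... --cgroup-manager=cgroupfs).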

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
E1229 07:16:53.538558  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-098723 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 --alsologtostderr: (2.041766509s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 --alsologtostderr
2025/12/29 07:16:57 [DEBUG] GET http://127.0.0.1:36971/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-098723 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-098723
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-098723
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-098723
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (129.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1229 07:17:24.259851  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:18:05.220647  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m8.219248802s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (129.14s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 kubectl -- rollout status deployment/busybox: (5.251768036s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-9q9hr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-bj76k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-dbpm2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-9q9hr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-bj76k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-dbpm2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-9q9hr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-bj76k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-dbpm2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.91s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-9q9hr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-9q9hr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-bj76k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1229 07:19:27.141490  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-bj76k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-dbpm2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 kubectl -- exec busybox-769dd8b7dd-dbpm2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (31.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 node add --alsologtostderr -v 5: (30.565915106s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5: (1.07697412s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.64s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-323288 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.602967749s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.60s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 status --output json --alsologtostderr -v 5: (1.091427804s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp testdata/cp-test.txt ha-323288:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile107464850/001/cp-test_ha-323288.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288:/home/docker/cp-test.txt ha-323288-m02:/home/docker/cp-test_ha-323288_ha-323288-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test_ha-323288_ha-323288-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288:/home/docker/cp-test.txt ha-323288-m03:/home/docker/cp-test_ha-323288_ha-323288-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test_ha-323288_ha-323288-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288:/home/docker/cp-test.txt ha-323288-m04:/home/docker/cp-test_ha-323288_ha-323288-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test_ha-323288_ha-323288-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp testdata/cp-test.txt ha-323288-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile107464850/001/cp-test_ha-323288-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m02:/home/docker/cp-test.txt ha-323288:/home/docker/cp-test_ha-323288-m02_ha-323288.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test_ha-323288-m02_ha-323288.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m02:/home/docker/cp-test.txt ha-323288-m03:/home/docker/cp-test_ha-323288-m02_ha-323288-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test_ha-323288-m02_ha-323288-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m02:/home/docker/cp-test.txt ha-323288-m04:/home/docker/cp-test_ha-323288-m02_ha-323288-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test_ha-323288-m02_ha-323288-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp testdata/cp-test.txt ha-323288-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile107464850/001/cp-test_ha-323288-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m03:/home/docker/cp-test.txt ha-323288:/home/docker/cp-test_ha-323288-m03_ha-323288.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test_ha-323288-m03_ha-323288.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m03:/home/docker/cp-test.txt ha-323288-m02:/home/docker/cp-test_ha-323288-m03_ha-323288-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test_ha-323288-m03_ha-323288-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m03:/home/docker/cp-test.txt ha-323288-m04:/home/docker/cp-test_ha-323288-m03_ha-323288-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test_ha-323288-m03_ha-323288-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp testdata/cp-test.txt ha-323288-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile107464850/001/cp-test_ha-323288-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m04:/home/docker/cp-test.txt ha-323288:/home/docker/cp-test_ha-323288-m04_ha-323288.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288 "sudo cat /home/docker/cp-test_ha-323288-m04_ha-323288.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m04:/home/docker/cp-test.txt ha-323288-m02:/home/docker/cp-test_ha-323288-m04_ha-323288-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m02 "sudo cat /home/docker/cp-test_ha-323288-m04_ha-323288-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 cp ha-323288-m04:/home/docker/cp-test.txt ha-323288-m03:/home/docker/cp-test_ha-323288-m04_ha-323288-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 ssh -n ha-323288-m03 "sudo cat /home/docker/cp-test_ha-323288-m04_ha-323288-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.20s)

TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 node stop m02 --alsologtostderr -v 5: (12.104949s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5: exit status 7 (828.444459ms)

-- stdout --
	ha-323288
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-323288-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-323288-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-323288-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1229 07:20:33.547993  783691 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:20:33.548102  783691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:20:33.548112  783691 out.go:374] Setting ErrFile to fd 2...
	I1229 07:20:33.548118  783691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:20:33.548469  783691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:20:33.548685  783691 out.go:368] Setting JSON to false
	I1229 07:20:33.548716  783691 mustload.go:66] Loading cluster: ha-323288
	I1229 07:20:33.549201  783691 config.go:182] Loaded profile config "ha-323288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:20:33.549223  783691 status.go:174] checking status of ha-323288 ...
	I1229 07:20:33.549811  783691 notify.go:221] Checking for updates...
	I1229 07:20:33.550750  783691 cli_runner.go:164] Run: docker container inspect ha-323288 --format={{.State.Status}}
	I1229 07:20:33.574556  783691 status.go:371] ha-323288 host status = "Running" (err=<nil>)
	I1229 07:20:33.574584  783691 host.go:66] Checking if "ha-323288" exists ...
	I1229 07:20:33.574893  783691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-323288
	I1229 07:20:33.613195  783691 host.go:66] Checking if "ha-323288" exists ...
	I1229 07:20:33.613513  783691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:20:33.613571  783691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-323288
	I1229 07:20:33.634173  783691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/ha-323288/id_rsa Username:docker}
	I1229 07:20:33.742620  783691 ssh_runner.go:195] Run: systemctl --version
	I1229 07:20:33.749367  783691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:20:33.763431  783691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:20:33.834310  783691 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-29 07:20:33.82163302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:20:33.834910  783691 kubeconfig.go:125] found "ha-323288" server: "https://192.168.49.254:8443"
	I1229 07:20:33.834944  783691 api_server.go:166] Checking apiserver status ...
	I1229 07:20:33.834995  783691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:20:33.847549  783691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	I1229 07:20:33.856517  783691 api_server.go:192] apiserver freezer: "9:freezer:/docker/bdaa0bf6dff1a4b469f0bdca0f2918259ab7bdae926ebc5ad0b5c782e4908f41/crio/crio-06052587151b39d33c978cda4ca5340d79d0e01daa98854d092d14e7e36c970e"
	I1229 07:20:33.856587  783691 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bdaa0bf6dff1a4b469f0bdca0f2918259ab7bdae926ebc5ad0b5c782e4908f41/crio/crio-06052587151b39d33c978cda4ca5340d79d0e01daa98854d092d14e7e36c970e/freezer.state
	I1229 07:20:33.865268  783691 api_server.go:214] freezer state: "THAWED"
	I1229 07:20:33.865298  783691 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 07:20:33.873852  783691 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 07:20:33.873880  783691 status.go:463] ha-323288 apiserver status = Running (err=<nil>)
	I1229 07:20:33.873890  783691 status.go:176] ha-323288 status: &{Name:ha-323288 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:20:33.873907  783691 status.go:174] checking status of ha-323288-m02 ...
	I1229 07:20:33.874217  783691 cli_runner.go:164] Run: docker container inspect ha-323288-m02 --format={{.State.Status}}
	I1229 07:20:33.892418  783691 status.go:371] ha-323288-m02 host status = "Stopped" (err=<nil>)
	I1229 07:20:33.892441  783691 status.go:384] host is not running, skipping remaining checks
	I1229 07:20:33.892450  783691 status.go:176] ha-323288-m02 status: &{Name:ha-323288-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:20:33.892469  783691 status.go:174] checking status of ha-323288-m03 ...
	I1229 07:20:33.892834  783691 cli_runner.go:164] Run: docker container inspect ha-323288-m03 --format={{.State.Status}}
	I1229 07:20:33.910287  783691 status.go:371] ha-323288-m03 host status = "Running" (err=<nil>)
	I1229 07:20:33.910313  783691 host.go:66] Checking if "ha-323288-m03" exists ...
	I1229 07:20:33.910611  783691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-323288-m03
	I1229 07:20:33.932103  783691 host.go:66] Checking if "ha-323288-m03" exists ...
	I1229 07:20:33.932411  783691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:20:33.932458  783691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-323288-m03
	I1229 07:20:33.949973  783691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/ha-323288-m03/id_rsa Username:docker}
	I1229 07:20:34.058772  783691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:20:34.073776  783691 kubeconfig.go:125] found "ha-323288" server: "https://192.168.49.254:8443"
	I1229 07:20:34.073852  783691 api_server.go:166] Checking apiserver status ...
	I1229 07:20:34.073901  783691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:20:34.093630  783691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I1229 07:20:34.107150  783691 api_server.go:192] apiserver freezer: "9:freezer:/docker/a4201a0f14c3cee75e2f56ee3bb6bc8d4b1511a8455d899f7bd372cdc098cdc5/crio/crio-1151d2925d082709bd7e5a6381381e1e966b0e5db9962288116180a35626f3b5"
	I1229 07:20:34.107339  783691 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a4201a0f14c3cee75e2f56ee3bb6bc8d4b1511a8455d899f7bd372cdc098cdc5/crio/crio-1151d2925d082709bd7e5a6381381e1e966b0e5db9962288116180a35626f3b5/freezer.state
	I1229 07:20:34.120796  783691 api_server.go:214] freezer state: "THAWED"
	I1229 07:20:34.120880  783691 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 07:20:34.129793  783691 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 07:20:34.129825  783691 status.go:463] ha-323288-m03 apiserver status = Running (err=<nil>)
	I1229 07:20:34.129836  783691 status.go:176] ha-323288-m03 status: &{Name:ha-323288-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:20:34.129864  783691 status.go:174] checking status of ha-323288-m04 ...
	I1229 07:20:34.130247  783691 cli_runner.go:164] Run: docker container inspect ha-323288-m04 --format={{.State.Status}}
	I1229 07:20:34.149408  783691 status.go:371] ha-323288-m04 host status = "Running" (err=<nil>)
	I1229 07:20:34.149435  783691 host.go:66] Checking if "ha-323288-m04" exists ...
	I1229 07:20:34.149757  783691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-323288-m04
	I1229 07:20:34.168246  783691 host.go:66] Checking if "ha-323288-m04" exists ...
	I1229 07:20:34.168563  783691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:20:34.168614  783691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-323288-m04
	I1229 07:20:34.187273  783691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33563 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/ha-323288-m04/id_rsa Username:docker}
	I1229 07:20:34.294485  783691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:20:34.313573  783691 status.go:176] ha-323288-m04 status: &{Name:ha-323288-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.93s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.1s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 node start m02 --alsologtostderr -v 5: (19.646835947s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5: (1.304618222s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.324195294s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.86s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 stop --alsologtostderr -v 5
E1229 07:21:17.352958  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.359142  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.369440  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.389702  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.429964  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.510267  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.670636  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:17.991174  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:18.632070  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:19.912540  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:22.474401  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:27.595090  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 stop --alsologtostderr -v 5: (37.699086992s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 start --wait true --alsologtostderr -v 5
E1229 07:21:37.835365  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:43.291459  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:21:58.315635  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:22:10.982557  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:22:39.276501  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 start --wait true --alsologtostderr -v 5: (1m17.963691541s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.86s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 node delete m03 --alsologtostderr -v 5: (10.994224454s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.00s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

TestMultiControlPlane/serial/StopCluster (36.15s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 stop --alsologtostderr -v 5: (36.037497451s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5: exit status 7 (113.991905ms)

-- stdout --
	ha-323288
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-323288-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-323288-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1229 07:23:42.375930  795713 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:23:42.376050  795713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:23:42.376061  795713 out.go:374] Setting ErrFile to fd 2...
	I1229 07:23:42.376066  795713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:23:42.376326  795713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:23:42.376522  795713 out.go:368] Setting JSON to false
	I1229 07:23:42.376554  795713 mustload.go:66] Loading cluster: ha-323288
	I1229 07:23:42.376652  795713 notify.go:221] Checking for updates...
	I1229 07:23:42.377046  795713 config.go:182] Loaded profile config "ha-323288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:23:42.377068  795713 status.go:174] checking status of ha-323288 ...
	I1229 07:23:42.377887  795713 cli_runner.go:164] Run: docker container inspect ha-323288 --format={{.State.Status}}
	I1229 07:23:42.395673  795713 status.go:371] ha-323288 host status = "Stopped" (err=<nil>)
	I1229 07:23:42.395698  795713 status.go:384] host is not running, skipping remaining checks
	I1229 07:23:42.395704  795713 status.go:176] ha-323288 status: &{Name:ha-323288 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:23:42.395740  795713 status.go:174] checking status of ha-323288-m02 ...
	I1229 07:23:42.396051  795713 cli_runner.go:164] Run: docker container inspect ha-323288-m02 --format={{.State.Status}}
	I1229 07:23:42.419726  795713 status.go:371] ha-323288-m02 host status = "Stopped" (err=<nil>)
	I1229 07:23:42.419752  795713 status.go:384] host is not running, skipping remaining checks
	I1229 07:23:42.419758  795713 status.go:176] ha-323288-m02 status: &{Name:ha-323288-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:23:42.419778  795713 status.go:174] checking status of ha-323288-m04 ...
	I1229 07:23:42.420067  795713 cli_runner.go:164] Run: docker container inspect ha-323288-m04 --format={{.State.Status}}
	I1229 07:23:42.441603  795713 status.go:371] ha-323288-m04 host status = "Stopped" (err=<nil>)
	I1229 07:23:42.441625  795713 status.go:384] host is not running, skipping remaining checks
	I1229 07:23:42.441632  795713 status.go:176] ha-323288-m04 status: &{Name:ha-323288-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.15s)

TestMultiControlPlane/serial/RestartCluster (71.1s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1229 07:24:01.197127  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m10.100257621s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (71.10s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

TestMultiControlPlane/serial/AddSecondaryNode (51.12s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 node add --control-plane --alsologtostderr -v 5: (49.930107407s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-323288 status --alsologtostderr -v 5: (1.190456158s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (51.12s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077689477s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (45.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-735012 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1229 07:26:17.352881  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-735012 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (45.306220143s)
--- PASS: TestJSONOutput/start/Command (45.31s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-735012 --output=json --user=testUser
E1229 07:26:45.037477  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-735012 --output=json --user=testUser: (5.816254349s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-825071 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-825071 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.415827ms)

-- stdout --
	{"specversion":"1.0","id":"35974635-0ee5-4487-a243-89a86df01e21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-825071] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52ed034c-5cd3-4c8e-824c-790c2a47a9c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"2c4f00c1-367d-41db-8991-4e937bc8122c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"488b7e4f-5536-4ab7-bf6d-644b507da190","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig"}}
	{"specversion":"1.0","id":"2e22f6ee-3df9-42c8-b5f3-8b77cdfbf298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube"}}
	{"specversion":"1.0","id":"9faedde7-a614-4d71-b5f2-9d4ff83fdbdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"14bb0db8-d9f8-4b0d-967f-3f8ec4e5f881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"479270ba-f4a0-45f3-a55a-9a534a63ca68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-825071" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-825071
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (34.56s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-841309 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-841309 --network=: (32.340117971s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-841309" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-841309
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-841309: (2.194490168s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.56s)

TestKicCustomNetwork/use_default_bridge_network (29.83s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-823680 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-823680 --network=bridge: (27.614242445s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-823680" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-823680
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-823680: (2.189750063s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.83s)

TestKicExistingNetwork (29.31s)

=== RUN   TestKicExistingNetwork
I1229 07:27:59.262538  741854 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:27:59.278374  741854 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:27:59.279536  741854 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1229 07:27:59.279576  741854 cli_runner.go:164] Run: docker network inspect existing-network
W1229 07:27:59.294678  741854 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1229 07:27:59.294709  741854 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1229 07:27:59.294725  741854 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1229 07:27:59.294826  741854 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:27:59.312602  741854 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7c63605c0365 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:f9:85:71:b4:78} reservation:<nil>}
I1229 07:27:59.313105  741854 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cc32f0}
I1229 07:27:59.313137  741854 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1229 07:27:59.313186  741854 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1229 07:27:59.372409  741854 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-431289 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-431289 --network=existing-network: (27.09704207s)
helpers_test.go:176: Cleaning up "existing-network-431289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-431289
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-431289: (2.071715579s)
I1229 07:28:28.557539  741854 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (29.31s)

TestKicCustomSubnet (30.89s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-719160 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-719160 --subnet=192.168.60.0/24: (28.643672762s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-719160 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-719160" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-719160
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-719160: (2.208780236s)
--- PASS: TestKicCustomSubnet (30.89s)

TestKicStaticIP (30.35s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-275564 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-275564 --static-ip=192.168.200.200: (28.015395455s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-275564 ip
helpers_test.go:176: Cleaning up "static-ip-275564" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-275564
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-275564: (2.180655952s)
--- PASS: TestKicStaticIP (30.35s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (62.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-161056 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-161056 --driver=docker  --container-runtime=crio: (27.062355546s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-163369 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-163369 --driver=docker  --container-runtime=crio: (29.815063694s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-161056
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-163369
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-163369" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-163369
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-163369: (2.15044747s)
helpers_test.go:176: Cleaning up "first-161056" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-161056
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-161056: (2.403979373s)
--- PASS: TestMinikubeProfile (62.90s)

TestMountStart/serial/StartWithMountFirst (8.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-615474 --memory=3072 --mount-string /tmp/TestMountStartserial3946375044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-615474 --memory=3072 --mount-string /tmp/TestMountStartserial3946375044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.879723909s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.88s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-615474 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-617246 --memory=3072 --mount-string /tmp/TestMountStartserial3946375044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-617246 --memory=3072 --mount-string /tmp/TestMountStartserial3946375044/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.828390663s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.83s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-617246 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-615474 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-615474 --alsologtostderr -v=5: (1.712199093s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-617246 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-617246
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-617246: (1.284022244s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.12s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-617246
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-617246: (7.118757358s)
--- PASS: TestMountStart/serial/RestartStopped (8.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-617246 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (74.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-272744 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1229 07:31:17.350650  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:31:43.291532  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-272744 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.881057745s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.41s)
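The two-node bring-up above reduces to a single start invocation plus a status check. A hedged, minimal equivalent (profile name illustrative, `minikube` standing in for the binary under test):

  minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
  minikube -p multinode-demo status    # expect one control plane and one worker, both Running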

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-272744 -- rollout status deployment/busybox: (3.201799673s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-4qcsq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-xmzr9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-4qcsq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-xmzr9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-4qcsq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-xmzr9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-4qcsq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-4qcsq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-xmzr9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-272744 -- exec busybox-769dd8b7dd-xmzr9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (28.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-272744 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-272744 -v=5 --alsologtostderr: (28.188803779s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-272744 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp testdata/cp-test.txt multinode-272744:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3778456345/001/cp-test_multinode-272744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744:/home/docker/cp-test.txt multinode-272744-m02:/home/docker/cp-test_multinode-272744_multinode-272744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m02 "sudo cat /home/docker/cp-test_multinode-272744_multinode-272744-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744:/home/docker/cp-test.txt multinode-272744-m03:/home/docker/cp-test_multinode-272744_multinode-272744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m03 "sudo cat /home/docker/cp-test_multinode-272744_multinode-272744-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp testdata/cp-test.txt multinode-272744-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3778456345/001/cp-test_multinode-272744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744-m02:/home/docker/cp-test.txt multinode-272744:/home/docker/cp-test_multinode-272744-m02_multinode-272744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744 "sudo cat /home/docker/cp-test_multinode-272744-m02_multinode-272744.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744-m02:/home/docker/cp-test.txt multinode-272744-m03:/home/docker/cp-test_multinode-272744-m02_multinode-272744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m03 "sudo cat /home/docker/cp-test_multinode-272744-m02_multinode-272744-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp testdata/cp-test.txt multinode-272744-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3778456345/001/cp-test_multinode-272744-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744-m03:/home/docker/cp-test.txt multinode-272744:/home/docker/cp-test_multinode-272744-m03_multinode-272744.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744 "sudo cat /home/docker/cp-test_multinode-272744-m03_multinode-272744.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 cp multinode-272744-m03:/home/docker/cp-test.txt multinode-272744-m02:/home/docker/cp-test_multinode-272744-m03_multinode-272744-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 ssh -n multinode-272744-m02 "sudo cat /home/docker/cp-test_multinode-272744-m03_multinode-272744-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.73s)
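The copy matrix above exercises three directions of `minikube cp` (host to node, node to host, and node to node), each verified by reading the file back over `ssh -n`. A trimmed sketch of the pattern, with illustrative profile names and paths:

  minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt                                  # host -> node
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                               # node -> host
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt    # node -> node
  minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"                             # verify contents on the target node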

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 node stop m03
E1229 07:33:06.343385  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-272744 node stop m03: (1.311750237s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-272744 status: exit status 7 (531.486258ms)

                                                
                                                
-- stdout --
	multinode-272744
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-272744-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-272744-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr: exit status 7 (538.526923ms)

                                                
                                                
-- stdout --
	multinode-272744
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-272744-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-272744-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:33:07.214160  846217 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:33:07.214331  846217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:33:07.214353  846217 out.go:374] Setting ErrFile to fd 2...
	I1229 07:33:07.214380  846217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:33:07.214932  846217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:33:07.215176  846217 out.go:368] Setting JSON to false
	I1229 07:33:07.215245  846217 mustload.go:66] Loading cluster: multinode-272744
	I1229 07:33:07.215339  846217 notify.go:221] Checking for updates...
	I1229 07:33:07.215704  846217 config.go:182] Loaded profile config "multinode-272744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:33:07.215744  846217 status.go:174] checking status of multinode-272744 ...
	I1229 07:33:07.216255  846217 cli_runner.go:164] Run: docker container inspect multinode-272744 --format={{.State.Status}}
	I1229 07:33:07.242464  846217 status.go:371] multinode-272744 host status = "Running" (err=<nil>)
	I1229 07:33:07.242487  846217 host.go:66] Checking if "multinode-272744" exists ...
	I1229 07:33:07.242810  846217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-272744
	I1229 07:33:07.272665  846217 host.go:66] Checking if "multinode-272744" exists ...
	I1229 07:33:07.273010  846217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:33:07.273071  846217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-272744
	I1229 07:33:07.290735  846217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33668 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/multinode-272744/id_rsa Username:docker}
	I1229 07:33:07.394432  846217 ssh_runner.go:195] Run: systemctl --version
	I1229 07:33:07.401234  846217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:33:07.414845  846217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:33:07.472740  846217 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:33:07.462794097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:33:07.473328  846217 kubeconfig.go:125] found "multinode-272744" server: "https://192.168.67.2:8443"
	I1229 07:33:07.473361  846217 api_server.go:166] Checking apiserver status ...
	I1229 07:33:07.473407  846217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:33:07.484670  846217 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	I1229 07:33:07.493016  846217 api_server.go:192] apiserver freezer: "9:freezer:/docker/df309dc421c7d01a5981983e283228b4ffb7e9f85301c1f643cbbf3ffe69635c/crio/crio-ac97ecea7fae9e4347586215d66d7d632171ee4de3720bd9a39d58b622245871"
	I1229 07:33:07.493082  846217 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/df309dc421c7d01a5981983e283228b4ffb7e9f85301c1f643cbbf3ffe69635c/crio/crio-ac97ecea7fae9e4347586215d66d7d632171ee4de3720bd9a39d58b622245871/freezer.state
	I1229 07:33:07.500468  846217 api_server.go:214] freezer state: "THAWED"
	I1229 07:33:07.500499  846217 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1229 07:33:07.508609  846217 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1229 07:33:07.508637  846217 status.go:463] multinode-272744 apiserver status = Running (err=<nil>)
	I1229 07:33:07.508648  846217 status.go:176] multinode-272744 status: &{Name:multinode-272744 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:33:07.508665  846217 status.go:174] checking status of multinode-272744-m02 ...
	I1229 07:33:07.509007  846217 cli_runner.go:164] Run: docker container inspect multinode-272744-m02 --format={{.State.Status}}
	I1229 07:33:07.526298  846217 status.go:371] multinode-272744-m02 host status = "Running" (err=<nil>)
	I1229 07:33:07.526322  846217 host.go:66] Checking if "multinode-272744-m02" exists ...
	I1229 07:33:07.526636  846217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-272744-m02
	I1229 07:33:07.543438  846217 host.go:66] Checking if "multinode-272744-m02" exists ...
	I1229 07:33:07.543765  846217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:33:07.543814  846217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-272744-m02
	I1229 07:33:07.560885  846217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33673 SSHKeyPath:/home/jenkins/minikube-integration/22353-739997/.minikube/machines/multinode-272744-m02/id_rsa Username:docker}
	I1229 07:33:07.666099  846217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:33:07.678985  846217 status.go:176] multinode-272744-m02 status: &{Name:multinode-272744-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:33:07.679017  846217 status.go:174] checking status of multinode-272744-m03 ...
	I1229 07:33:07.679316  846217 cli_runner.go:164] Run: docker container inspect multinode-272744-m03 --format={{.State.Status}}
	I1229 07:33:07.696230  846217 status.go:371] multinode-272744-m03 host status = "Stopped" (err=<nil>)
	I1229 07:33:07.696255  846217 status.go:384] host is not running, skipping remaining checks
	I1229 07:33:07.696274  846217 status.go:176] multinode-272744-m03 status: &{Name:multinode-272744-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-272744 node start m03 -v=5 --alsologtostderr: (7.418455212s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.21s)
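Stopping and restarting a single worker, as the two subtests above do, goes through the `node` subcommand; note that `status` deliberately exits non-zero (exit status 7) while any node is down. A minimal sketch with an illustrative profile name:

  minikube -p multinode-demo node stop m03
  minikube -p multinode-demo status            # exit status 7 while m03 is stopped
  minikube -p multinode-demo node start m03
  kubectl get nodes                            # all nodes should report Ready again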

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (71.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-272744
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-272744
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-272744: (25.140372833s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-272744 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-272744 --wait=true -v=5 --alsologtostderr: (46.493127634s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-272744
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.76s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-272744 node delete m03: (4.779840895s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.51s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-272744 stop: (23.901415305s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-272744 status: exit status 7 (97.477331ms)

                                                
                                                
-- stdout --
	multinode-272744
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-272744-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr: exit status 7 (99.922939ms)

                                                
                                                
-- stdout --
	multinode-272744
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-272744-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:34:57.239575  854078 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:34:57.240051  854078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:34:57.240088  854078 out.go:374] Setting ErrFile to fd 2...
	I1229 07:34:57.240108  854078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:34:57.240428  854078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:34:57.240665  854078 out.go:368] Setting JSON to false
	I1229 07:34:57.240723  854078 mustload.go:66] Loading cluster: multinode-272744
	I1229 07:34:57.241184  854078 config.go:182] Loaded profile config "multinode-272744": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:34:57.241281  854078 status.go:174] checking status of multinode-272744 ...
	I1229 07:34:57.241844  854078 cli_runner.go:164] Run: docker container inspect multinode-272744 --format={{.State.Status}}
	I1229 07:34:57.241255  854078 notify.go:221] Checking for updates...
	I1229 07:34:57.264897  854078 status.go:371] multinode-272744 host status = "Stopped" (err=<nil>)
	I1229 07:34:57.264921  854078 status.go:384] host is not running, skipping remaining checks
	I1229 07:34:57.264928  854078 status.go:176] multinode-272744 status: &{Name:multinode-272744 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:34:57.264959  854078 status.go:174] checking status of multinode-272744-m02 ...
	I1229 07:34:57.265266  854078 cli_runner.go:164] Run: docker container inspect multinode-272744-m02 --format={{.State.Status}}
	I1229 07:34:57.282567  854078 status.go:371] multinode-272744-m02 host status = "Stopped" (err=<nil>)
	I1229 07:34:57.282593  854078 status.go:384] host is not running, skipping remaining checks
	I1229 07:34:57.282600  854078 status.go:176] multinode-272744-m02 status: &{Name:multinode-272744-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-272744 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-272744 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.480759951s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-272744 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.20s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (29.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-272744
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-272744-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-272744-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.52365ms)

                                                
                                                
-- stdout --
	* [multinode-272744-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-272744-m02' is duplicated with machine name 'multinode-272744-m02' in profile 'multinode-272744'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-272744-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-272744-m03 --driver=docker  --container-runtime=crio: (27.452239883s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-272744
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-272744: exit status 80 (338.035005ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-272744 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-272744-m03 already exists in multinode-272744-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-272744-m03
E1229 07:36:17.350063  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-272744-m03: (2.040624418s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.98s)
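Both failures above are intentional: a new profile may not reuse the machine name of an existing profile's node (exit status 14), and `node add` refuses to create a node whose generated name collides with an existing profile (exit status 80). A hedged sketch of the conflicting calls, with illustrative names:

  minikube node list -p multinode-demo
  minikube start -p multinode-demo-m02 --driver=docker --container-runtime=crio    # rejected: duplicate profile/machine name (exit 14)
  minikube node add -p multinode-demo                                              # rejected if the next node name is already taken by another profile (exit 80)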

                                                
                                    
TestScheduledStopUnix (102.52s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-301780 --memory=3072 --driver=docker  --container-runtime=crio
E1229 07:36:43.293890  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-301780 --memory=3072 --driver=docker  --container-runtime=crio: (26.198009298s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-301780 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:36:49.093638  862529 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:36:49.093835  862529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:36:49.093862  862529 out.go:374] Setting ErrFile to fd 2...
	I1229 07:36:49.093882  862529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:36:49.094310  862529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:36:49.094709  862529 out.go:368] Setting JSON to false
	I1229 07:36:49.094884  862529 mustload.go:66] Loading cluster: scheduled-stop-301780
	I1229 07:36:49.095548  862529 config.go:182] Loaded profile config "scheduled-stop-301780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:36:49.095689  862529 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/scheduled-stop-301780/config.json ...
	I1229 07:36:49.095923  862529 mustload.go:66] Loading cluster: scheduled-stop-301780
	I1229 07:36:49.096111  862529 config.go:182] Loaded profile config "scheduled-stop-301780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-301780 -n scheduled-stop-301780
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:36:49.557042  862615 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:36:49.557262  862615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:36:49.557294  862615 out.go:374] Setting ErrFile to fd 2...
	I1229 07:36:49.557314  862615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:36:49.557588  862615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:36:49.557891  862615 out.go:368] Setting JSON to false
	I1229 07:36:49.558915  862615 daemonize_unix.go:73] killing process 862548 as it is an old scheduled stop
	I1229 07:36:49.559079  862615 mustload.go:66] Loading cluster: scheduled-stop-301780
	I1229 07:36:49.559478  862615 config.go:182] Loaded profile config "scheduled-stop-301780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:36:49.559586  862615 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/scheduled-stop-301780/config.json ...
	I1229 07:36:49.559842  862615 mustload.go:66] Loading cluster: scheduled-stop-301780
	I1229 07:36:49.559994  862615 config.go:182] Loaded profile config "scheduled-stop-301780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1229 07:36:49.567217  741854 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/scheduled-stop-301780/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-301780 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-301780 -n scheduled-stop-301780
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-301780
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-301780 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:37:15.489132  863097 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:37:15.489339  863097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:37:15.489375  863097 out.go:374] Setting ErrFile to fd 2...
	I1229 07:37:15.489395  863097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:37:15.489809  863097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:37:15.490190  863097 out.go:368] Setting JSON to false
	I1229 07:37:15.490348  863097 mustload.go:66] Loading cluster: scheduled-stop-301780
	I1229 07:37:15.490985  863097 config.go:182] Loaded profile config "scheduled-stop-301780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:37:15.491110  863097 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/scheduled-stop-301780/config.json ...
	I1229 07:37:15.491345  863097 mustload.go:66] Loading cluster: scheduled-stop-301780
	I1229 07:37:15.491535  863097 config.go:182] Loaded profile config "scheduled-stop-301780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1229 07:37:40.400629  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-301780
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-301780: exit status 7 (71.325438ms)

                                                
                                                
-- stdout --
	scheduled-stop-301780
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-301780 -n scheduled-stop-301780
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-301780 -n scheduled-stop-301780: exit status 7 (65.114767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-301780" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-301780
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-301780: (4.727126913s)
--- PASS: TestScheduledStopUnix (102.52s)
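The scheduled-stop flow above can be driven directly from the CLI with the same flags; a minimal sketch (profile name illustrative):

  minikube stop -p demo --schedule 5m           # arm a stop five minutes out
  minikube stop -p demo --schedule 15s          # re-arming replaces the previously scheduled stop
  minikube stop -p demo --cancel-scheduled      # cancel any pending scheduled stop
  minikube status -p demo --format={{.TimeToStop}}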

                                                
                                    
TestInsufficientStorage (12.69s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-190289 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-190289 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.118017106s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18c21287-ffc5-4840-b026-660116b58783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-190289] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6092adf4-b495-4c1d-b456-2763e88df9df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"dab881f4-5445-473c-aa69-77b533f9070d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75e60aa6-9895-4e47-83ea-81625b7ca187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig"}}
	{"specversion":"1.0","id":"0dda589f-f3d4-4f78-9ce2-1175cfdf6b7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube"}}
	{"specversion":"1.0","id":"de730117-34ed-4511-9edb-8f0499cbe97e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2045112d-c5cb-4043-bab5-417b1176d29d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f1c9ad2-d389-4ab8-a963-86d51e9f85f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"313308af-3ad4-4504-81fa-34cfd7df8ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"44c4eb04-db86-4f5b-909d-e6f4a7446381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa68cdde-291f-44f9-b135-753c9f53cd98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"19a42a37-ed3a-4b84-b21e-b28b58e38549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-190289\" primary control-plane node in \"insufficient-storage-190289\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"523fb55e-8ffd-499d-afaa-379badc73116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766979815-22353 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2aab7f25-4fcd-4c1b-aa7e-0afcf99917d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7c6c142-c7b4-4b67-a5f8-e6fba9e1be5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-190289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-190289 --output=json --layout=cluster: exit status 7 (295.229465ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-190289","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-190289","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:38:15.760913  864958 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-190289" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-190289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-190289 --output=json --layout=cluster: exit status 7 (292.625095ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-190289","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-190289","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:38:16.053084  865024 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-190289" does not appear in /home/jenkins/minikube-integration/22353-739997/kubeconfig
	E1229 07:38:16.063497  865024 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/insufficient-storage-190289/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-190289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-190289
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-190289: (1.984756967s)
--- PASS: TestInsufficientStorage (12.69s)
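Judging from the JSON events above, the test harness appears to simulate a nearly full /var via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables. A hedged reproduction of the same check, with an illustrative profile name:

  MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio    # exits 26 (RSRC_DOCKER_STORAGE)
  minikube status -p storage-demo --output=json --layout=cluster                                                       # exits 7, StatusName "InsufficientStorage"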

                                                
                                    
TestRunningBinaryUpgrade (63.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.786365012 start -p running-upgrade-413378 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1229 07:41:43.293705  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.786365012 start -p running-upgrade-413378 --memory=3072 --vm-driver=docker  --container-runtime=crio: (37.55427241s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-413378 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-413378 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.453685469s)
helpers_test.go:176: Cleaning up "running-upgrade-413378" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-413378
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-413378: (2.976581135s)
--- PASS: TestRunningBinaryUpgrade (63.03s)
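The upgrade path exercised above is simply: create a cluster with the previously released binary, then re-run start against the same profile with the binary under test. Schematically (the versioned path is the binary the test downloaded; `minikube` stands for the new build; note the old release still uses --vm-driver):

  /tmp/minikube-v1.35.0.786365012 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
  minikube start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=crio    # upgrades the running profile in place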

                                                
                                    
TestKubernetesUpgrade (135.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.087499338s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-474794 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-474794 --alsologtostderr: (1.555916481s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-474794 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-474794 status --format={{.Host}}: exit status 7 (147.840704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.914379012s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-474794 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (251.50479ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-474794] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-474794
	    minikube start -p kubernetes-upgrade-474794 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4747942 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-474794 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-474794 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.424856839s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-474794" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-474794
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-474794: (2.449488202s)
--- PASS: TestKubernetesUpgrade (135.10s)
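The sequence above upgrades Kubernetes within a single profile and then confirms that a downgrade is refused. A condensed sketch of the same flow, reusing the flags from the log (the profile name is illustrative):
    # start on the old Kubernetes version, then stop the cluster
    out/minikube-linux-arm64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p k8s-upgrade-demo
    # a stopped profile makes "status" exit with code 7, which the test treats as acceptable
    out/minikube-linux-arm64 -p k8s-upgrade-demo status --format={{.Host}}
    # upgrade in place to the newer version and verify with kubectl
    out/minikube-linux-arm64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=crio
    kubectl --context k8s-upgrade-demo version --output=json
    # going back to v1.28.0 exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED); the suggested
    # recovery is to delete and recreate the profile at the older version
    out/minikube-linux-arm64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio || true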

                                                
                                    
TestMissingContainerUpgrade (117.42s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2666211793 start -p missing-upgrade-865040 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2666211793 start -p missing-upgrade-865040 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.62373302s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-865040
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-865040
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-865040 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-865040 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.829000484s)
helpers_test.go:176: Cleaning up "missing-upgrade-865040" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-865040
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-865040: (4.357542465s)
--- PASS: TestMissingContainerUpgrade (117.42s)
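This test covers recovery when the node container has been removed out from under an older profile: the container is stopped and deleted with docker directly, and the new binary is expected to recreate it on the next start. Roughly, under the same assumptions as above (illustrative profile name, old release binary in /tmp):
    /tmp/minikube-v1.35.0 start -p missing-upgrade-demo --memory=3072 --driver=docker --container-runtime=crio
    # simulate the missing node container by removing it behind minikube's back
    docker stop missing-upgrade-demo
    docker rm missing-upgrade-demo
    # the new binary should rebuild the container and bring the profile back up
    out/minikube-linux-arm64 start -p missing-upgrade-demo --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio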

                                                
                                    
TestPause/serial/Start (58.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-199444 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-199444 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (58.00642427s)
--- PASS: TestPause/serial/Start (58.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (39.28s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-199444 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-199444 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.23784088s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.28s)
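The two pause checks above start a profile with addons disabled and full readiness waiting, then run start a second time against the same profile and time it; the second start is expected to reuse the existing configuration rather than reprovision. A sketch with an illustrative profile name:
    out/minikube-linux-arm64 start -p pause-demo --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
    # second start on the same profile: expected to be a quick, no-reconfiguration restart
    out/minikube-linux-arm64 start -p pause-demo --alsologtostderr -v=1 --driver=docker --container-runtime=crio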

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (70.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2889504229 start -p stopped-upgrade-386518 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2889504229 start -p stopped-upgrade-386518 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.286987109s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2889504229 -p stopped-upgrade-386518 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2889504229 -p stopped-upgrade-386518 stop: (1.308885397s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-386518 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1229 07:41:17.350685  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-386518 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.327396702s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.92s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-386518
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-386518: (2.64570457s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.65s)
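Unlike the running-binary case, this upgrade path stops the cluster with the old binary first, restarts it with the new binary, and then checks that minikube logs still works against the upgraded profile. In outline (illustrative profile name; the old binary path is an assumption):
    /tmp/minikube-v1.35.0 start -p stopped-upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0 -p stopped-upgrade-demo stop
    # the new binary takes over the stopped profile
    out/minikube-linux-arm64 start -p stopped-upgrade-demo --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 logs -p stopped-upgrade-demo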

                                                
                                    
TestPreload/Start-NoPreload-PullImage (70.41s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-759848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-759848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m3.69928566s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-759848 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-759848
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-759848: (5.930023371s)
--- PASS: TestPreload/Start-NoPreload-PullImage (70.41s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (48.36s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-759848 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-759848 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (48.113258292s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-759848 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (48.36s)
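The preload pair first builds a cluster with preloaded images disabled, pulls an extra user image into it, stops it, and then restarts with preload enabled to confirm the user image survives (checked via image list). Approximately, with an illustrative profile name:
    out/minikube-linux-arm64 start -p preload-demo --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio
    # pull an image that is not part of the preload tarball
    out/minikube-linux-arm64 -p preload-demo image pull ghcr.io/medyagh/image-mirrors/busybox:latest
    out/minikube-linux-arm64 stop -p preload-demo
    # restart with preload enabled; the previously pulled image should still be listed
    out/minikube-linux-arm64 start -p preload-demo --preload=true --wait=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p preload-demo image list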

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611743 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-611743 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (107.96885ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-611743] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
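The expected failure here documents a flag constraint: --no-kubernetes cannot be combined with an explicit --kubernetes-version, and the MK_USAGE message points at clearing any globally configured version. In short (profile name illustrative):
    # exits with status 14 (MK_USAGE): the two flags are mutually exclusive
    out/minikube-linux-arm64 start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # if a version is set in the global config, the suggested fix is to unset it
    minikube config unset kubernetes-version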

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611743 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611743 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.453531289s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-611743 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.82s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (26.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611743 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611743 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.608041776s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-611743 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-611743 status -o json: exit status 2 (317.759726ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-611743","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-611743
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-611743: (2.000170381s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.93s)

                                                
                                    
TestNoKubernetes/serial/Start (8.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611743 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611743 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.249764065s)
--- PASS: TestNoKubernetes/serial/Start (8.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22353-739997/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-611743 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-611743 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.123819ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-611743
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-611743: (1.357651445s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-611743 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-611743 --driver=docker  --container-runtime=crio: (7.442480978s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-611743 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-611743 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.769853ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
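Taken together, the NoKubernetes steps show the lifecycle of a Kubernetes-free profile: start with --no-kubernetes, confirm via status -o json that the host is running while kubelet and the API server stay stopped, and verify over ssh that the kubelet unit is inactive (the systemctl probe exiting non-zero is the passing outcome). A compressed sketch, with an illustrative profile name:
    out/minikube-linux-arm64 start -p nok8s-demo --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio
    # Host should report "Running" with Kubelet and APIServer "Stopped"
    out/minikube-linux-arm64 -p nok8s-demo status -o json
    # a non-zero exit here means kubelet is not active, which is what the test expects
    out/minikube-linux-arm64 ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
    out/minikube-linux-arm64 stop -p nok8s-demo
    out/minikube-linux-arm64 start -p nok8s-demo --driver=docker --container-runtime=crio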

                                                
                                    
TestNetworkPlugins/group/false (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-095300 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-095300 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (213.749016ms)

                                                
                                                
-- stdout --
	* [false-095300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:45:41.608397  906638 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:45:41.608635  906638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:45:41.608663  906638 out.go:374] Setting ErrFile to fd 2...
	I1229 07:45:41.608684  906638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:45:41.608995  906638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-739997/.minikube/bin
	I1229 07:45:41.609465  906638 out.go:368] Setting JSON to false
	I1229 07:45:41.610360  906638 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16093,"bootTime":1766978248,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1229 07:45:41.610462  906638 start.go:143] virtualization:  
	I1229 07:45:41.614101  906638 out.go:179] * [false-095300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:45:41.617311  906638 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:45:41.617384  906638 notify.go:221] Checking for updates...
	I1229 07:45:41.620928  906638 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:45:41.623876  906638 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-739997/kubeconfig
	I1229 07:45:41.626874  906638 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-739997/.minikube
	I1229 07:45:41.629887  906638 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:45:41.632902  906638 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:45:41.636401  906638 config.go:182] Loaded profile config "force-systemd-env-002562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1229 07:45:41.636532  906638 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:45:41.663174  906638 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:45:41.663299  906638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:45:41.758143  906638 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:45:41.746425457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:45:41.758255  906638 docker.go:319] overlay module found
	I1229 07:45:41.761237  906638 out.go:179] * Using the docker driver based on user configuration
	I1229 07:45:41.764125  906638 start.go:309] selected driver: docker
	I1229 07:45:41.764139  906638 start.go:928] validating driver "docker" against <nil>
	I1229 07:45:41.764153  906638 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:45:41.767737  906638 out.go:203] 
	W1229 07:45:41.770665  906638 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1229 07:45:41.773444  906638 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-095300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-095300" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-095300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-095300"

                                                
                                                
----------------------- debugLogs end: false-095300 [took: 3.301394364s] --------------------------------
helpers_test.go:176: Cleaning up "false-095300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-095300
--- PASS: TestNetworkPlugins/group/false (3.71s)
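The intentional failure above captures a runtime constraint: with the crio container runtime, --cni=false is rejected ("The crio container runtime requires CNI"), and the debugLogs dump that follows simply confirms that no profile was created. Illustratively:
    # fails with exit status 14 (MK_USAGE): crio requires a CNI, so --cni=false is not allowed
    out/minikube-linux-arm64 start -p false-demo --memory=3072 --cni=false --driver=docker --container-runtime=crio
    # omitting --cni (letting minikube pick a CNI) avoids the error with crio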

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1229 07:51:43.291359  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.654405459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-512867 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [76520809-8782-4067-b741-af44649b8703] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [76520809-8782-4067-b741-af44649b8703] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003093983s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-512867 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-512867 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-512867 --alsologtostderr -v=3: (12.026381476s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867: exit status 7 (78.203777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-512867 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-512867 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.300188918s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512867 -n old-k8s-version-512867
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cqflv" [22312289-ae02-4dfe-853e-a5e86e53d038] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003565331s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cqflv" [22312289-ae02-4dfe-853e-a5e86e53d038] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003310679s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-512867 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-512867 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
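The old-k8s-version group walks a full start/stop/restart lifecycle on Kubernetes v1.28.0: deploy a busybox pod from testdata, stop the cluster, enable the dashboard addon while it is stopped, start again, and confirm that both the user workload and the dashboard pods come back before inspecting the image list. A condensed version of those commands (profile name illustrative; the busybox manifest is the test suite's own testdata file):
    out/minikube-linux-arm64 start -p old-k8s-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0
    kubectl --context old-k8s-demo create -f testdata/busybox.yaml
    kubectl --context old-k8s-demo exec busybox -- /bin/sh -c "ulimit -n"
    out/minikube-linux-arm64 stop -p old-k8s-demo
    # addons can be enabled while the cluster is stopped; they take effect on the next start
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-demo --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    out/minikube-linux-arm64 start -p old-k8s-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0
    out/minikube-linux-arm64 -p old-k8s-demo image list --format=json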

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1229 07:54:20.400914  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (52.728683798s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-822299 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [65d3fc57-2c89-4df3-a2fb-980d8699e1c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [65d3fc57-2c89-4df3-a2fb-980d8699e1c9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004340435s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-822299 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)
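
DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8 minutes for pods labelled `integration-test=busybox` to reach Running, then runs `ulimit -n` inside the container. Below is a minimal sketch of that kind of label-selector wait, shelling out to kubectl rather than using the real helpers_test.go code; the helper name and the polling interval are illustrative assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForRunning polls kubectl until every pod matching the selector reports phase Running.
    func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
                "get", "pods", "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil {
                phases := strings.Fields(string(out))
                running := len(phases) > 0
                for _, p := range phases {
                    if p != "Running" {
                        running = false
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // illustrative poll interval
        }
        return fmt.Errorf("pods %q in %q not Running within %v", selector, namespace, timeout)
    }

    func main() {
        if err := waitForRunning("no-preload-822299", "default", "integration-test=busybox", 8*time.Minute); err != nil {
            fmt.Println(err)
        }
    }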

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-822299 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-822299 --alsologtostderr -v=3: (12.011871772s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299: exit status 7 (81.49015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-822299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
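
As the output above shows, `minikube status` exits with status 7 and prints "Stopped" while the node is down, which the test tolerates ("may be ok") before enabling the dashboard addon on the stopped cluster. A small hedged sketch of reading that exit code from Go follows; the binary path and profile name are copied from the log, and treating 7 as "stopped" is taken from this run rather than from documentation.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "status",
            "--format={{.Host}}", "-p", "no-preload-822299")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("host:", strings.TrimSpace(string(out)))
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            // Exit status 7 with "Stopped" on stdout, as seen in this run:
            // treat it as an expected stopped host rather than a failure.
            fmt.Println("host stopped:", strings.TrimSpace(string(out)))
        default:
            fmt.Println("status failed:", err)
        }
    }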

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (47.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-822299 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.024225184s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-822299 -n no-preload-822299
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-w5gsg" [ec2d9d1a-9700-4fc2-8e3d-a6b1dc47ea6b] Running
E1229 07:56:17.350600  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0032153s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-w5gsg" [ec2d9d1a-9700-4fc2-8e3d-a6b1dc47ea6b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003317102s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-822299 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-822299 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1229 07:56:43.291554  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (45.919458592s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-664524 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d756b8ef-3530-493d-bf60-23520e77c33c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d756b8ef-3530-493d-bf60-23520e77c33c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003473306s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-664524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-664524 --alsologtostderr -v=3
E1229 07:57:38.317656  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.323027  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.333396  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.353805  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.394149  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.474538  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.635033  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:38.955622  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:39.596542  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:40.877617  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:43.438855  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-664524 --alsologtostderr -v=3: (12.008533565s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524: exit status 7 (139.56751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-664524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (48.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1229 07:57:48.559750  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-664524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.021833759s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-664524 -n embed-certs-664524
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.60s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1229 07:58:19.280878  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.685015025s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.69s)
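
This profile starts the API server on port 8444 instead of the default 8443 (`--apiserver-port=8444`), so the kubeconfig entry for the cluster should point at that port. A quick hedged way to confirm that from Go is sketched below; it assumes the profile name is also the kubeconfig cluster name, which matches how the kubectl --context calls elsewhere in this report address the cluster.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Query the server URL recorded for the cluster; expect it to end in :8444 for this profile.
        out, err := exec.Command("kubectl", "config", "view", "-o",
            `jsonpath={.clusters[?(@.name=="default-k8s-diff-port-358521")].cluster.server}`).Output()
        if err != nil {
            log.Fatal(err)
        }
        server := strings.TrimSpace(string(out))
        fmt.Println("server:", server, "uses 8444:", strings.HasSuffix(server, ":8444"))
    }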

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-brfwf" [f6b076e3-ede8-4f23-bc12-4bb5acdbffd4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.014530726s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-brfwf" [f6b076e3-ede8-4f23-bc12-4bb5acdbffd4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013481827s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-664524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-664524 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e1c7f8ff-3089-47c4-82fd-1061811c1b63] Pending
helpers_test.go:353: "busybox" [e1c7f8ff-3089-47c4-82fd-1061811c1b63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e1c7f8ff-3089-47c4-82fd-1061811c1b63] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004730841s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (33.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (33.333756151s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-358521 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-358521 --alsologtostderr -v=3: (12.235390664s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521: exit status 7 (153.802899ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-358521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-358521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (54.157624987s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358521 -n default-k8s-diff-port-358521
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-921639 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-921639 --alsologtostderr -v=3: (1.778756254s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.78s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639: exit status 7 (80.526928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-921639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (17.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-921639 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (16.696253982s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-921639 -n newest-cni-921639
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-921639 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs (4.13s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-273719 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1229 08:00:04.077264  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.082681  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.093026  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.113337  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.153828  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.234743  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.395158  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:04.715371  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:05.356375  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-273719 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.941560443s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-273719" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-273719
--- PASS: TestPreload/PreloadSrc/gcs (4.13s)

                                                
                                    
x
+
TestPreload/PreloadSrc/github (11.7s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-158002 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1229 08:00:06.637455  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:00:09.198677  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-158002 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (11.501775471s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-158002" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-158002
--- PASS: TestPreload/PreloadSrc/github (11.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jn77m" [021d3321-f34e-420b-974c-ab81771e3922] Running
E1229 08:00:14.319473  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004854501s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jn77m" [021d3321-f34e-420b-974c-ab81771e3922] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003247633s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-358521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs-cached (0.49s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-586553 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-586553" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-586553
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.49s)
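
The three PreloadSrc subtests fetch the same style of preload tarball from GCS (4.13s), from GitHub releases (11.70s), and then from GCS again with the tarball already in the local cache (0.49s), so the cached run effectively skips the download. Below is a minimal cache-first sketch of that decision; every path, bucket, and file name is a placeholder for illustration, not minikube's actual layout.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // fetchPreload returns the local tarball path, downloading only on a cache miss.
    // Naming and URLs are placeholders.
    func fetchPreload(cacheDir, k8sVersion, source string) (string, error) {
        name := fmt.Sprintf("preload-%s-crio-arm64.tar.lz4", k8sVersion) // placeholder name
        local := filepath.Join(cacheDir, name)
        if _, err := os.Stat(local); err == nil {
            return local, nil // cache hit: mirrors the fast gcs-cached run above
        }
        var url string
        switch source {
        case "gcs":
            url = "https://storage.googleapis.com/<bucket>/" + name // placeholder bucket
        case "github":
            url = "https://github.com/<owner>/<repo>/releases/download/<tag>/" + name // placeholder release
        default:
            return "", fmt.Errorf("unknown preload source %q", source)
        }
        // Download url to local here (omitted), then return the cached path.
        _ = url
        return local, nil
    }

    func main() {
        p, err := fetchPreload(os.TempDir(), "v1.34.0-rc.2", "gcs")
        fmt.Println(p, err)
    }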

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (64.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m4.29581966s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-358521 image list --format=json
E1229 08:00:22.162774  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (54.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1229 08:00:45.040207  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:01:17.350854  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/functional-098723/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (54.667786685s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-095300 "pgrep -a kubelet"
I1229 08:01:22.755430  741854 config.go:182] Loaded profile config "auto-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-095300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7s56c" [628eac03-fdee-4a3a-bbae-3c206f9d48cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 08:01:26.000416  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-7s56c" [628eac03-fdee-4a3a-bbae-3c206f9d48cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003904304s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-p2s25" [01f531eb-d3d0-4d9f-9f93-ad7a2d7e57f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003774014s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-095300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
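
For each network plugin the suite runs the same three probes inside the netcat deployment: cluster DNS (`nslookup kubernetes.default`), a localhost connection, and a hairpin connection back to the pod's own service name, the latter two via `nc -w 5 -i 5 -z`. The sketch below wraps those exact kubectl exec calls; the probe helper itself is illustrative and not part of net_test.go.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probe runs a shell command inside the netcat deployment of the given kube context.
    func probe(kubeContext, shellCmd string) error {
        return exec.Command("kubectl", "--context", kubeContext, "exec", "deployment/netcat",
            "--", "/bin/sh", "-c", shellCmd).Run()
    }

    func main() {
        ctx := "auto-095300"
        checks := []string{
            "nslookup kubernetes.default",    // DNS
            "nc -w 5 -i 5 -z localhost 8080", // Localhost
            "nc -w 5 -i 5 -z netcat 8080",    // HairPin: reach the pod via its own service
        }
        for _, c := range checks {
            if err := probe(ctx, c); err != nil {
                fmt.Println("FAIL:", c, err)
            } else {
                fmt.Println("ok:", c)
            }
        }
    }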

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-095300 "pgrep -a kubelet"
I1229 08:01:34.015992  741854 config.go:182] Loaded profile config "kindnet-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-095300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8kdpm" [0461c25a-6839-42f4-b500-55878e55d875] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8kdpm" [0461c25a-6839-42f4-b500-55878e55d875] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004664087s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-095300 exec deployment/netcat -- nslookup kubernetes.default
E1229 08:01:43.291879  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/addons-707536/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m20.265123056s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (57.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1229 08:02:38.317561  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:02:47.921380  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:06.004137  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/old-k8s-version-512867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.968419292s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.97s)
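
custom-flannel starts the cluster with `--cni=testdata/kube-flannel.yaml`, i.e. a user-supplied CNI manifest rather than a built-in plugin name. Purely as an illustration of applying such a manifest to an already-running cluster by hand (not what the minikube flag does internally, which wires the manifest in during start):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Apply a custom CNI manifest to an existing cluster; path taken from the test invocation above.
        cmd := exec.Command("kubectl", "--context", "custom-flannel-095300",
            "apply", "-f", "testdata/kube-flannel.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }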

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-095300 "pgrep -a kubelet"
I1229 08:03:08.027098  741854 config.go:182] Loaded profile config "custom-flannel-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-095300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wgv25" [599ea22b-3f70-4bc0-8983-d1488ba7743d] Pending
helpers_test.go:353: "netcat-5dd4ccdc4b-wgv25" [599ea22b-3f70-4bc0-8983-d1488ba7743d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-wgv25" [599ea22b-3f70-4bc0-8983-d1488ba7743d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004479335s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-mtvm4" [501f3889-29a9-4346-871a-b675e23a062d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003586695s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-095300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-095300 "pgrep -a kubelet"
I1229 08:03:21.946833  741854 config.go:182] Loaded profile config "calico-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-095300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-58fzq" [56f6e990-151e-4f6f-be70-556b45e0fa41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-58fzq" [56f6e990-151e-4f6f-be70-556b45e0fa41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00374715s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-095300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (75.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1229 08:03:51.247214  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.252461  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.262724  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.283015  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.323290  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.403599  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.564468  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:51.885441  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:52.526212  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:03:53.806378  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.324131433s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.32s)
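With --enable-default-cni=true the cluster comes up on minikube's built-in bridge CNI config rather than a separate CNI DaemonSet. To see what was written on the node, assuming the standard /etc/cni/net.d location (the exact file name is minikube's choice):

  minikube -p enable-default-cni-095300 ssh "sudo ls /etc/cni/net.d"
  minikube -p enable-default-cni-095300 ssh "sudo cat /etc/cni/net.d/*"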

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.37s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1229 08:04:01.499305  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:11.740391  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 08:04:32.220892  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/default-k8s-diff-port-358521/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.365506669s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-558z4" [7637fdd5-d67d-40f6-99e1-b749e7623cc7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003660835s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
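The ControllerPod check amounts to waiting for the flannel DaemonSet pod to go Ready; an approximate manual equivalent (namespace and label taken from the log above, the 600s timeout is an assumption matching the 10m wait):

  kubectl --context flannel-095300 -n kube-flannel get daemonset,pods -l app=flannel -o wide
  kubectl --context flannel-095300 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s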

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-095300 "pgrep -a kubelet"
I1229 08:04:57.974016  741854 config.go:182] Loaded profile config "enable-default-cni-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-095300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ck967" [2a798eb3-2367-4c4e-81f6-5002e8c32819] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-ck967" [2a798eb3-2367-4c4e-81f6-5002e8c32819] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004667944s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-095300 "pgrep -a kubelet"
I1229 08:04:59.394547  741854 config.go:182] Loaded profile config "flannel-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.45s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-095300 replace --force -f testdata/netcat-deployment.yaml
I1229 08:04:59.834781  741854 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mdr25" [27428427-032f-4f9c-baa2-9836ff61952b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 08:05:04.074896  741854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-739997/.minikube/profiles/no-preload-822299/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-mdr25" [27428427-032f-4f9c-baa2-9836ff61952b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00399248s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-095300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-095300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (72.83s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-095300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m12.829785357s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.83s)
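A quick sanity check that the bridge CNI is handing out pod IPs on this profile (the subnet depends on minikube defaults and is not asserted by the test):

  kubectl --context bridge-095300 get pods -A -o wide
  minikube -p bridge-095300 ssh "sudo ls /etc/cni/net.d"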

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-095300 "pgrep -a kubelet"
I1229 08:06:49.093475  741854 config.go:182] Loaded profile config "bridge-095300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-095300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-n44nw" [4fcd0bc2-b402-4a21-90c8-feb0c9f47d03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-n44nw" [4fcd0bc2-b402-4a21-90c8-feb0c9f47d03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003758246s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-095300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-095300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.41s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-846173 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-846173" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-846173
--- SKIP: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.31s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-062900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-062900
--- SKIP: TestStartStop/group/disable-driver-mounts (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.45s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-095300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-095300" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-095300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-095300"

                                                
                                                
----------------------- debugLogs end: kubenet-095300 [took: 3.30170135s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-095300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-095300
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.86s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-095300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-095300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-095300" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-095300" does not exist

>>> k8s: netcat logs:
error: context "cilium-095300" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-095300" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-095300" does not exist

>>> k8s: coredns logs:
error: context "cilium-095300" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-095300" does not exist

>>> k8s: api server logs:
error: context "cilium-095300" does not exist

>>> host: /etc/cni:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: ip a s:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: ip r s:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: iptables-save:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: iptables table nat:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-095300

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-095300

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-095300" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-095300" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-095300

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-095300

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-095300" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-095300" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-095300" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-095300" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-095300" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: kubelet daemon config:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> k8s: kubelet logs:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-095300

>>> host: docker daemon status:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: docker daemon config:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: docker system info:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: cri-docker daemon status:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: cri-docker daemon config:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: cri-dockerd version:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: containerd daemon status:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: containerd daemon config:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: containerd config dump:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: crio daemon status:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: crio daemon config:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: /etc/crio:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

>>> host: crio config:
* Profile "cilium-095300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-095300"

----------------------- debugLogs end: cilium-095300 [took: 3.70720502s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-095300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-095300
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)
